<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>bdb0ae7b-4d3</externalid>
      <Title>Laboratory Sciences Apprentice - Durham, NC - Expression of Interest (Evergreen)</Title>
      <Description><![CDATA[<p>Join AstraZeneca&#39;s Durham, NC site as a full-time Laboratory Scientist Apprentice and embark on a 2-year apprenticeship program registered with the US Department of Labor.</p>
<p>This program offers a blend of paid, hands-on experience and structured learning through in-house training and classroom instruction at Durham Technical Community College.</p>
<p>As an AstraZeneca apprentice, you will be a full-time employee and receive pay for both on-the-job training and academic coursework. A typical schedule includes 24–32 hours per week onsite at AstraZeneca, with the remaining hours spent in classes at Durham Technical Community College, based on the academic calendar.</p>
<p>You will join the Analytical Development team, supporting analytical chemistry laboratories and performing testing and studies that help ensure the quality of medicines produced onsite in Durham and across our global network.</p>
<p>Responsibilities:</p>
<ul>
<li>Support day-to-day laboratory operations, including ordering supplies, maintaining equipment, conducting instrument checks, managing samples and materials, and maintaining visual dashboards for lab readiness</li>
<li>Prepare solutions and samples, and perform routine analytical testing</li>
<li>Assist with analytical method development and laboratory studies</li>
<li>Maintain accurate documentation and data in compliance with cGMP requirements</li>
<li>Contribute to continuous improvement activities to enhance laboratory efficiency</li>
</ul>
<p>What you will learn:</p>
<ul>
<li>Chromatographic techniques, including HPLC/UPLC and chromatography data systems (e.g., Empower)</li>
<li>cGMP laboratory practices and quality standards in a regulated environment</li>
<li>Use of electronic lab notebooks, and digital and AI-enabled tools</li>
<li>Application of Lean principles and continuous improvement in daily work</li>
</ul>
<p>Education &amp; Coursework:</p>
<p>AstraZeneca partners with Durham Technical Community College to provide classroom instruction and laboratory training as part of the apprenticeship. Coursework supports the development of core scientific and technical skills, and successful completion of the associate degree is required to earn the apprenticeship certificate.</p>
<p>Minimum Requirements:</p>
<ul>
<li>High school diploma or GED (must be earned by start date)</li>
<li>No prior related work experience required</li>
<li>Willingness to complete a 30-minute online assessment</li>
<li>Ability to stand for prolonged periods</li>
<li>Proficiency in English (ability to read, write, and understand documents)</li>
<li>Basic computer skills, including Microsoft Word, Excel, PowerPoint, and Outlook</li>
<li>Ability to commute to AstraZeneca site and Durham Technical Community College</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Currently enrolled in an Associate degree program in a science-related field</li>
<li>Currently enrolled in or have completed General Chemistry I and/or II (or equivalent coursework)</li>
</ul>
<p>To apply, please register your interest in the program and receive notifications when the formal application window opens.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Chromatographic techniques, HPLC/UPLC and data systems, cGMP laboratory practices, Electronic lab notebooks, Digital and AI-enabled tools</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>AstraZeneca</Employername>
      <Employerlogo>https://logos.yubhub.co/astrazeneca.eightfold.ai.png</Employerlogo>
      <Employerdescription>AstraZeneca is a multinational pharmaceutical company that develops and manufactures medicines for various diseases.</Employerdescription>
      <Employerwebsite>https://astrazeneca.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://astrazeneca.eightfold.ai/careers/job/563877689841032</Applyto>
      <Location>Durham, North Carolina, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d6e7c226-e8c</externalid>
      <Title>Technical Lead, MFT MDE Analytics Engineering</Title>
      <Description><![CDATA[<p>The SPEED Market Data team at Equity IT is seeking a hands-on Technical Lead to own and drive a critical workstream focused on architecting, implementing, monitoring, and supporting low-latency C++ systems. As a Technical Lead, you will shape the future of the industry by working alongside exceptional engineers and strategists to solve significant engineering problems.</p>
<p>We are looking for a strong technical leader with financial markets technology experience and real-time market data expertise to design, build, and support our global real-time market data platform. This role emphasizes technical leadership, architectural ownership, and cross-team coordination rather than people management.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Act as the technical owner for a major market data workstream, setting technical direction, defining architecture, and driving execution across the full lifecycle.</li>
<li>Collaborate with hardware and software teams across divisions to design and build real-time market data processing and distribution systems.</li>
<li>Lead and drive new technical initiatives for the team, including evaluating technologies, defining standards, and establishing best practices.</li>
<li>Design and develop systems, interfaces, and tools for historical market data and trading simulations that increase research productivity.</li>
<li>Architect and implement components of an enterprise market data platform, including components for caching, aggregation, conflation and value-added data enrichment.</li>
<li>Optimise platform performance using network and systems programming, and advanced low-latency techniques (CPU, NIC, kernel, and application-level tuning).</li>
<li>Lead the design and maintenance of automated test and benchmark frameworks, and tools for risk management, performance tracking, and system validation.</li>
<li>Provide technical leadership for the support and operation of enterprise real-time market data environments, including coordinating internal, vendor, and exchange-driven changes.</li>
<li>Design and engineer components to automate support and management of the market data platform, including monitoring, real-time and historical metrics collection/visualisation, and self-service administrative/user tools.</li>
<li>Serve as a primary technical liaison for users of the market data environment (Portfolio Managers, trading desks, and core technology teams), translating requirements into robust technical solutions.</li>
<li>Lead the enhancement of processes and workflows for operating the market data platform (release/deployment, incident management and remediation, exchange notification handling, defining and enforcing SLAs).</li>
<li>Mentor and influence other engineers through code reviews, design reviews, and hands-on guidance, fostering a culture of technical excellence and accountability.</li>
</ul>
<p>Qualifications / Skills Required:</p>
<ul>
<li>Degree in Computer Science or a related field with a strong background in data structures, algorithms, and object-oriented programming in modern C++.</li>
<li>Deep understanding of Linux system internals and networking, especially in low-latency and high-throughput environments.</li>
<li>Strong knowledge of CPU architecture and the ability to leverage CPU capabilities for performance optimisation.</li>
<li>Demonstrated experience acting as a technical lead or senior engineer owning complex systems or workstreams end-to-end (design, delivery, and operations).</li>
<li>Able to prioritise and make trade-offs in a fast-moving, high-pressure, constantly changing environment; strong sense of urgency, ownership, and follow-through.</li>
<li>Strong belief in and practice of extreme ownership, with a track record of taking accountability for systems in production.</li>
<li>Effective communication and stakeholder management skills: able to work closely with business and technology users, understand their needs, and drive appropriate technical solutions.</li>
<li>Experience building solutions on cloud environments such as GCP and AWS.</li>
<li>Knowledge of additional programming languages such as Java, Python, or scripting (Perl, shell).</li>
<li>Technical background in application development on complex market data systems (e.g., Bloomberg, Thomson Reuters, etc.).</li>
<li>Experience supporting market data environments within a global organisation, including internally developed DMA feed handlers and distribution infrastructure.</li>
<li>Strong understanding of market data concepts and functionality, including data models (fields/messages), protocols (e.g., snapshot + delta), order book representations (L1/L2/L3), recovery, and reliability.</li>
<li>Hands-on Site Reliability Engineering or DevOps experience, including system administration, automation, measurement, and release/deployment management.</li>
<li>Experience with monitoring, metrics, and command/control tooling for distributed market data platforms, with the ability to evaluate existing solutions and drive enhancements across development and operations.</li>
<li>Ability to operate with a high level of thoroughness and attention to detail, demonstrating strong ownership of deliverables and production systems.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>C++, Linux system internals, Networking, CPU architecture, Object-oriented programming, Cloud environments, Java, Python, Scripting, Market data systems, Site Reliability Engineering, DevOps, Monitoring, Metrics, Command/control tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a global alternative investment management firm; this role sits within the Equity IT SPEED Market Data team.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954905529</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aa5f286d-ad4</externalid>
      <Title>Senior Genome Editing Digital Pipeline Scientist</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Senior Genome Editing Digital Pipeline Scientist to drive the data vision that powers next-generation gene-edited products. As a Data Strategy &amp; Pipeline Leader in Gene Editing, you will coordinate a holistic data strategy across the editing pipeline so that diverse genomic and biological datasets are connected, accessible, and ready for advanced analytics. You will work closely with multi-functional teams to ensure that data, models, and decision tools are seamlessly integrated into product development workflows, enabling faster, more informed decisions and impactful innovation in gene-edited germplasm.</p>
<p>Your primary responsibilities will include providing leadership to define and coordinate the data strategy that enables data-driven, model-based analytics for improved gene-edited germplasm, including accelerating data connectivity across the editing pipeline with multi-functional teams. You will also lead cross-functional projects with partners across Crop Science to automate decision making and connect data assets that accelerate development of gene-edited products.</p>
<p>In addition, you will translate complex business data knowledge, scientific workflows, and product needs into clear technical implementation plans that can be executed by data scientists, data engineers, and developers. You will design and guide the development of robust data systems and analytics pipelines that support a wide variety of genomic and computational biology use cases and can scale with future business needs.</p>
<p>As a key communicator and integrator between scientific, technical, and business stakeholders, you will align roadmaps, prioritize initiatives, and ensure that data and analytics solutions deliver measurable value. You will also attract, mentor, and develop talent, serving as a coach for peers and colleagues in key areas of expertise to support their professional growth and build a strong data and analytics community.</p>
<p>Finally, you will champion and support Health, Safety &amp; Environment, Compliance, Business Conduct, and Human Rights policies and culture in all activities and collaborations.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$114,400.00 - $171,600.00</Salaryrange>
      <Skills>Genomics, Computational Biology, Evolution, Quantitative Genetics, Analysis of large biological datasets, Analytical pipeline development, Python, R, Data systems design, Cross-functional collaboration, Genomic control of physiological and biochemical pathways, Genome-wide association (GWA) data, QTL analysis, Candidate gene analysis, Gene expression analysis, Molecular marker development, Pedigree data</Skills>
      <Category>Engineering</Category>
      <Industry>Life Sciences</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company with a global presence.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976715204</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b2aae11e-f20</externalid>
      <Title>Sr Genome Editing Operations Scientist</Title>
      <Description><![CDATA[<p>As a Genome Editing Operations Scientist at Bayer Crop Science, you will guide the development of an increasingly efficient gene editing pipeline by building connected data systems that drive decisions. You will connect disparate data sources and leverage key advancement data to group projects, reagents, and samples, using this connected data system to deliver models that optimize resource use and pipeline capacity by integrating data awareness across lab, greenhouse, and field operations.</p>
<p>Your primary responsibilities will be to:</p>
<ul>
<li>Guide the development of highly connected data systems that enable data-driven, model-based analytics to improve pipeline effectiveness and efficiency;</li>
<li>Work with multifunctional teams to enable data connectivity across the editing pipeline, integrating information from lab, greenhouse, and field operations;</li>
<li>Collaborate with partner teams across Crop Science (Gene Editing, IT Enterprise, Data and Engineering) to automate decision making and improve operational efficiency to accelerate development of gene-edited products;</li>
<li>Serve as a key communicator translating business data knowledge and operational workflows into clear technical implementation plans for data scientists, data engineers, and developers;</li>
<li>Demonstrate autonomy in building relationships and networks within your unit and across functions, most often with members of the Crop Genome Editing team and closely aligned partner teams;</li>
<li>Act as a consultant to leadership and colleagues on digital strategy and data-driven operations through clear, organized, and influential communication;</li>
<li>Actively build your own acumen in biology, genome design, and digital operations while sharing best practices and learnings with the broader Biology and Genome Design community.</li>
</ul>
<p>We seek an incumbent who possesses the following qualifications:</p>
<ul>
<li>PhD in Computational Biology, Computer Science and Engineering, or another relevant scientific field with a minimum of 6 years of relevant experience, or MS with 10+ years of relevant experience;</li>
<li>Demonstrated track record developing data systems and pipelines that enable efficient product delivery and operational modeling;</li>
<li>Demonstrated experience working collaboratively in cross-functional and cross-cultural teams to achieve common goals;</li>
<li>Demonstrated experience leading and influencing activities of cross-functional teams without direct reporting relationships;</li>
<li>Ability to lead and influence key stakeholders through challenges and opportunities and to facilitate solutions.</li>
</ul>
<p>Preferred qualifications include experience building data pipelines as a ML DevOps Engineer or Data Engineer, experience with Operations Research, and experience analyzing large biological datasets and developing analytical pipelines using Python, R, or similar software and languages.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$114,400.00 - $171,600.00</Salaryrange>
      <Skills>Computational Biology, Computer Science and Engineering, Data Systems, Pipeline Development, Collaboration, Communication, Digital Strategy, Data-Driven Operations, ML DevOps Engineer, Data Engineer, Operations Research, Python, R, Cloud Development Environments</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Bayer Crop Science</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer Crop Science develops crop protection and biotechnology products for agriculture.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976597728</Applyto>
      <Location>Chesterfield</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3f2cb60f-80a</externalid>
      <Title>Senior Genome Editing Digital Enablement</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Senior Genome Editing Digital Enablement Scientist to join our team. As a key partner and enabler of multi-disciplinary teams, you will design large-scale data systems and analytical pipelines that power our gene editing efforts. You will develop analytical tools that connect biological and operations data to support more efficient and accurate decisions across the gene editing pipeline. Your expertise in both computational biology and genetics will be essential in driving and coordinating multi-functional teams to enable robust data connectivity and interoperability across the editing pipeline.</p>
<p>In this role, you will lead cross-functional projects with IT, Data Engineering, Genome Editing, and other partner teams to automate decision making and connect data to accelerate development of gene-edited products. You will translate complex biological processes into scalable digital workflows that support decision making, advancement, and prioritization within the gene editing program. Your strong ability to collaborate and lead in cross-functional, multi-disciplinary teams will be crucial in influencing without authority and aligning diverse stakeholders around shared digital solutions.</p>
<p>As a member of the Biology and Genome Design community, you will actively build your own acumen and capabilities while sharing best practices with others. You will serve as a key communicator and thought partner on digital enablement strategy, clearly articulating requirements, trade-offs, and opportunities to scientific and non-scientific stakeholders.</p>
<p>We seek an incumbent who possesses a PhD in Genomics, Computational Biology, Evolution, Quantitative Genetics, or another relevant scientific field with a minimum of 6 years of relevant experience, or an MS with 10+ years of experience developing data systems and analytics pipelines that enable product delivery using genetic and computational biology datasets.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$114,400.00 - $171,600.00</Salaryrange>
      <Skills>computational biology, genetics, data systems, analytical pipelines, Python, R, large-scale biological datasets, genome-wide association (GWA) data, QTL analysis, candidate gene analysis, gene expression analysis, molecular marker development, pedigree data</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Bayer Crop Science</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer Crop Science is a leading provider of crop protection and seed solutions.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976613783</Applyto>
      <Location>Chesterfield</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>88132c81-446</externalid>
      <Title>Staff Software Engineer, Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to lead the design and development of core data storage, streaming, caching, and indexing platforms and underlying systems. As a key member of the Platform Engineering team, you&#39;ll drive the architecture, design, implementation, and reliability of our foundational data platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements.</p>
<p>In this role, you&#39;ll collaborate with cross-functional teams to define, design, and deliver new features, proactively identifying opportunities for, and driving improvements to, current programming practices, including process enhancements and tool upgrades. You&#39;ll present technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</p>
<p>Ideally, you&#39;ll have 8+ years of full-time engineering experience post-graduation, specializing in back-end systems, specifically building large-scale data storage, streaming, and warehousing systems. You&#39;ll need extensive experience with various database technologies, streaming/processing solutions, indexing/caching, and data query engines.</p>
<p>As a Staff Software Engineer, you&#39;ll provide technical leadership, including upholding and upleveling engineering standards across the organization and mentoring junior engineers. You&#39;ll possess excellent communication and collaboration skills, and the ability to translate complex technical concepts for non-technical stakeholders.</p>
<p>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes and various public cloud offerings is essential. You&#39;ll also need extensive experience in software development and a deep understanding of distributed systems, cloud platforms, and data systems.</p>
<p>You&#39;ll drive cross-functional collaboration and communication at an organizational or broader level, and be excited to work with AI technologies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>database technologies, streaming/processing solutions, indexing/caching, data query engines, containerization &amp; deployment technologies, public cloud offerings, software development, distributed systems, cloud platforms, data systems, performance tuning, cost optimizations, data lifecycle strategy, data privacy, hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4649903005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c2e7ae82-8ff</externalid>
      <Title>Sr. Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>As a Senior Delivery Solutions Architect at Databricks, you will play a crucial role in empowering customers to solve the world&#39;s toughest data problems using the Databricks Data Intelligence Platform. You will collaborate with sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. Your primary goal will be to ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected.</p>
<p>This is a hybrid technical and commercial role, requiring you to drive growth in your assigned customers and use cases by leading your customers&#39; stakeholders, building executive relationships, and creating and driving plans and strategies for Databricks colleagues to build upon. You will also become the post-sale technical lead across all Databricks products, using your skills and technical credibility to engage and communicate at all levels within an organisation.</p>
<p>Your impact will be significant across Databricks&#39; most strategic accounts.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Engaging with Solutions Architects to understand the full use case demand plan for prioritised customers</li>
<li>Leading the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Being the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders</li>
<li>Creating, owning, and executing a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>
<li>Navigating Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs</li>
<li>Developing an execution plan that covers all activities of all customer-facing technical roles and teams to cover main use cases moving from &#39;win&#39; to production, enablement/user growth plan, product adoption, organic needs for current investment, executive and operational governance, and providing internal and external updates</li>
</ul>
<p>To succeed in this role, you will need 10+ years of experience in technical project or program delivery within the domain of Data and AI, a strong understanding of solution architecture related to distributed data systems, programming experience in Python, SQL, or Scala, and experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Scala, Solution architecture, Distributed data systems, Customer-facing pre-sales, Technical architecture, Customer success, Consulting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organizations worldwide rely on Databricks.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8342273002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b2f6f807-fc6</externalid>
      <Title>Software Engineer - Distributed Data Systems</Title>
      <Description><![CDATA[<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>
<p>We are looking for a software engineer to join our team as a founding member of our Belgrade site. As a software engineer, you will be involved in the entire development cycle and exemplify all core Databricks values.</p>
<p>The responsibilities you will have:</p>
<ul>
<li>Drive requirements clarity and design decisions for ambiguous problems</li>
<li>Produce technical design documents and project plans</li>
<li>Develop new features</li>
<li>Mentor more junior engineers</li>
<li>Test, roll out to production, and monitor</li>
</ul>
<p>What we look for:</p>
<ul>
<li>BS in Computer Science or equivalent practical experience in databases or distributed systems</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables</li>
<li>Motivated by delivering customer value and impact</li>
<li>3+ years of production-level experience in Java, Scala, or C++</li>
<li>Solid foundation in algorithms and data structures and their real-world use cases</li>
<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop)</li>
</ul>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
<Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8012691002</Applyto>
      <Location>Belgrade, Serbia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d9b7d5ae-6bf</externalid>
      <Title>Software Engineer, Distributed Systems</Title>
<Description><![CDATA[<p>We&#39;re growing our team of passionate creatives and builders on a mission to make design accessible to all. Our platform helps teams bring ideas to life, whether you&#39;re brainstorming, creating a prototype, translating designs into code, or iterating with AI. From idea to product, Figma empowers teams to streamline workflows, move faster, and work together in real time from anywhere in the world.</p>
<p>As a Software Engineer on our Infrastructure team, you’ll help design, build, and operate the systems that power our real-time collaborative design tools used by millions of people worldwide. We’re scaling fast, and we’re looking for experienced distributed systems engineers across a variety of teams. Whether you’re passionate about storage, compute orchestration, developer tooling, networking, or real-time data systems, this role offers an opportunity to shape the technical foundation of one of the most beloved design platforms in the world.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain scalable and reliable infrastructure systems that support product innovation and user collaboration at scale.</li>
<li>Architect and evolve distributed systems including storage platforms, streaming infrastructure, and compute orchestration.</li>
<li>Improve developer experience by building internal platforms, CI/CD systems, build tools, and APIs.</li>
<li>Collaborate across product and infrastructure teams to design secure, maintainable, and performant systems.</li>
<li>Participate in shaping platform strategy, roadmaps, and engineering best practices across the organization.</li>
<li>Debug and resolve complex production issues that span services and layers of the stack.</li>
<li>Mentor engineers and foster a culture of collaboration, inclusivity, and technical excellence.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of Software Engineering experience, specifically in backend or infrastructure engineering.</li>
<li>Deep understanding of distributed systems concepts such as sharding, replication, consistency, and eventual convergence.</li>
<li>Experience with cloud-native environments (AWS, GCP, or Azure), infrastructure-as-code, and container orchestration.</li>
<li>Proficiency in languages such as Go, TypeScript, Python, Rust, or Ruby.</li>
<li>Strong system design skills and a track record of architecting resilient production systems.</li>
<li>Excellent communication skills, with experience collaborating across teams and mentoring others.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience scaling storage platforms (e.g., Postgres, Redis, S3, DynamoDB) or operating streaming systems like Kafka.</li>
<li>Background in traffic management, DDoS mitigation, or service mesh technologies (e.g., Envoy, Istio).</li>
<li>A history of developing complex, real-time distributed systems at scale.</li>
<li>A passion for building developer productivity tools, including development environments, CI/CD pipelines, and build systems.</li>
<li>Experience with evolving large-scale, shared developer platforms to improve reliability and developer velocity.</li>
<li>Strong problem-solving skills and a bias for action, especially when tackling high-impact, gritty challenges.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$153,000-$376,000 USD</Salaryrange>
      <Skills>distributed systems, cloud-native environments, infrastructure-as-code, container orchestration, Go, TypeScript, Python, Rust, Ruby, system design, resilient production systems, storage platforms, streaming infrastructure, compute orchestration, developer tooling, networking, real-time data systems, traffic management, DDoS mitigation, service mesh technologies, complex distributed systems, developer productivity tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Figma</Employername>
      <Employerlogo>https://logos.yubhub.co/figma.com.png</Employerlogo>
      <Employerdescription>Figma is a design platform that helps teams bring ideas to life through real-time collaboration.</Employerdescription>
      <Employerwebsite>https://www.figma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/figma/jobs/5552549004</Applyto>
      <Location>San Francisco, CA • New York, NY • United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4c199de2-f4c</externalid>
      <Title>Staff Backend Engineer, Core Entities Data Foundation</Title>
      <Description><![CDATA[<p>Job Title: Staff Backend Engineer, Core Entities Data Foundation</p>
<p>Location: Remote - US</p>
<p>Department: Software Engineering</p>
<p>Job Description:</p>
<p>Airbnb was born in 2007 when two hosts welcomed three guests to their San Francisco home, and has since grown to over 5 million hosts who have welcomed over 2 billion guest arrivals in almost every country across the globe.</p>
<p>Every day, hosts offer unique stays and experiences that make it possible for guests to connect with communities in a more authentic way.</p>
<p>The Community You Will Join:</p>
<p>Marketplaces Data and AI is a group of passionate machine learning, software, data, and analytics engineers. We are responsible for developing new, cutting-edge AI and data products that leverage Airbnb’s massive datasets across Users, Listings, Pricing, and Supply/Demand.</p>
<p>You will be a crucial part of the Guest and Host organization, developing a semantic model for our Core Entities and Events that powers the experiences of millions of guests and hosts globally.</p>
<p>The Difference You Will Make:</p>
<p>You will own some of the most critical data systems at Airbnb, building a semantic model that autonomously drives both infrastructure and code changes. This role will focus on building new capabilities in our data ecosystem: a platform that proactively detects issues, orchestrates solutions and seamlessly integrates human-in-the-loop workflows for expert guidance.</p>
<p>Your contributions will shift Airbnb from storing massive amounts of data to intelligently organizing and utilizing it, empowering our product and operations teams to move faster and deliver a resilient, high quality experience for our global community of Guests and Hosts.</p>
<p>A Typical Day:</p>
<ul>
<li>Develop an actionable technical strategy from our ambitious vision to drive infrastructure and code from a semantic model of the business</li>
<li>Improve and expand our detection systems to find more complex issues and integrate human-in-the-loop workflows for subject matter expert guidance</li>
<li>Architect and develop systems that autonomously orchestrate infrastructure and code changes based on findings from our detection platforms</li>
<li>Partner closely with Machine Learning, Data Engineering, and Product teams to ensure deep integration in the product space, not just a siloed project.</li>
<li>Identify areas for improvement, champion the adoption of best practices in engineering architecture, perform technical design reviews, and develop our software engineers across team boundaries.</li>
<li>Research the latest innovations in semantic modeling and AI-driven infrastructure, actively sharing these insights to act as a thought leader within Airbnb’s engineering organization.</li>
</ul>
<p>Your Expertise:</p>
<ul>
<li>9+ years of relevant software development industry experience in a fast-paced tech environment</li>
<li>BS, MS or PhD in CS or related field</li>
<li>Expertise with backend systems in large-scale service-oriented architectures</li>
<li>Good judgment in making tradeoffs to balance short-term business needs with long-term technical quality</li>
<li>Strong understanding of how deep backend systems are expressed in the UX shown to customers</li>
<li>End-to-end mentality that transcends team boundaries and helps find globally optimal solutions</li>
<li>Excellent communication skills and the ability to work well within a team and with teams across the engineering organization</li>
<li>Passionate about efficiency, availability, system quality and user experience</li>
</ul>
<p>Your Location:</p>
<p>This position is US - Remote Eligible. The role may include occasional work at an Airbnb office or attendance at offsites, as agreed to with your manager. While the position is Remote Eligible, you must live in a state where Airbnb, Inc. has a registered entity. Click here for the up-to-date list of excluded states. This list is continuously evolving, so please check back with us if the state you live in is on the exclusion list. If your position is employed by another Airbnb entity, your recruiter will inform you what states you are eligible to work from.</p>
<p>Our Commitment To Inclusion &amp; Belonging:</p>
<p>Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions. All qualified individuals are encouraged to apply. We strive to also provide a disability inclusive application and interview process. If you are a candidate with a disability and require reasonable accommodation in order to submit an application, please contact us at: reasonableaccommodations@airbnb.com. Please include your full name, the role you’re applying for and the accommodation necessary to assist you with the recruiting process. We ask that you only reach out to us if you are a candidate whose disability prevents you from being able to complete our online application.</p>
<p>How We&#39;ll Take Care of You:</p>
<p>Our job titles may span more than one career level. The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs and market demands. The base pay range is subject to change and may be modified in the future. This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits.</p>
<p>Pay Range $212,000-$265,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$212,000-$265,000 USD</Salaryrange>
      <Skills>backend systems, large-scale service-oriented architectures, semantic modeling, AI-driven infrastructure, data systems, infrastructure and code changes, human-in-the-loop workflows, expert guidance, data ecosystem, detecting issues, orchestrating solutions, technical design reviews, engineering architecture, software engineers, team boundaries, globally optimal solutions, communication skills, team collaboration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest online marketplaces in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7774153</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0b29d013-412</externalid>
      <Title>Senior Software Engineer - Distributed Data Systems</Title>
      <Description><![CDATA[<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Our customers use deep data insights to improve their business. As a senior software engineer on the Runtime team, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>
<p>Some example projects include:</p>
<ul>
<li>Apache Spark: Develop the de facto open source standard framework for big data.</li>
<li>Data Plane Storage: Provide reliable and high performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>
<li>Delta Lake: A storage management system that combines the scale and cost-efficiency of data lakes, the performance and reliability of a data warehouse, and the low latency of streaming.</li>
<li>Delta Pipelines: It&#39;s difficult to manage even a single data engineering pipeline. The goal of the Delta Pipelines project is to make it simple and possible to orchestrate and operate tens of thousands of data pipelines.</li>
<li>Performance Engineering: Build the next generation query optimizer and execution engine that&#39;s fast, tuning free, scalable, and robust.</li>
</ul>
<p>We look for:</p>
<ul>
<li>BS (or higher) in Computer Science, related technical field or equivalent practical experience.</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>
<li>Motivated by delivering customer value and impact.</li>
<li>5+ years of production-level experience in Java, Scala, or C++.</li>
<li>Strong foundation in algorithms and data structures and their real-world use cases.</li>
<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop).</li>
</ul>
<p>Pay Range Transparency: Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Local Pay Range $166,000-$225,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/4513122002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a78c8753-f89</externalid>
      <Title>Staff Software Engineer - Distributed Data Systems</Title>
      <Description><![CDATA[<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>
<p>We develop and operate one of the largest scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day. At our scale, we regularly observe cloud hardware, network, and operating system faults, and our software must gracefully shield our customers from any of the above.</p>
<p>As a software engineer on the Runtime team at Databricks, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>
<p>Below are some example projects:</p>
<ul>
<li>Apache Spark: Develop the de facto open source standard framework for big data.</li>
<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>
<li>Delta Lake: A storage management system that combines the scale and cost-efficiency of data lakes, the performance and reliability of a data warehouse, and the low latency of streaming.</li>
<li>Delta Pipelines: It&#39;s difficult to manage even a single data engineering pipeline. The goal of the Delta Pipelines project is to make it simple and possible to orchestrate and operate tens of thousands of data pipelines.</li>
<li>Performance Engineering: Build the next generation query optimizer and execution engine that&#39;s fast, tuning-free, scalable, and robust.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>BS in Computer Science, related technical field or equivalent practical experience.</li>
<li>Optional: MS or PhD in databases, distributed systems.</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>
<li>Driven by delivering customer value and impact.</li>
<li>8+ years of production-level experience in either Java, Scala, or C++.</li>
<li>Strong foundation in algorithms and data structures and their real-world use cases.</li>
<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop).</li>
</ul>
<p>Pay Range Transparency: Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,000-$260,000 USD</Salaryrange>
      <Skills>Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best data and AI infrastructure platform, serving thousands of organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6544364002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3a17bc01-d7d</externalid>
      <Title>Staff Software Engineer</Title>
<Description><![CDATA[<p>dbt Labs is seeking a Staff Software Engineer to join our Engineering team. As a seasoned engineer, you will architect and build the durable memory substrate that powers agentic analytics workflows. This platform stores not just metadata, but meaning: decisions, intent, rationale, and history, and makes it safely accessible to humans, agents, and applications.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Prototyping apt technical solutions and finding best fits for the context engine.</li>
<li>Architecting and building the core Context Platform.</li>
<li>Designing schemas and primitives for Decision Memory and enterprise context.</li>
<li>Owning context storage systems (graph, vector, event/time-based).</li>
<li>Building read/write/query APIs used by agents, products, and external apps.</li>
<li>Designing permission-aware, auditable context access.</li>
</ul>
<p>You will be working closely with agentic systems engineers and product leadership to ensure the context engine is interoperable, portable, and zero-lock-in by design.</p>
<p>In this role, you will own:</p>
<ul>
<li>Context schemas and schema evolution strategies.</li>
<li>Storage and data modeling choices.</li>
<li>Platform APIs and interfaces.</li>
<li>Security, identity propagation, and audit foundations.</li>
<li>Long-term scalability and correctness of context data.</li>
</ul>
<p>You will not own:</p>
<ul>
<li>Agent behavior or orchestration logic.</li>
<li>Business rules or governance policy decisions.</li>
<li>Product UI or workflow automation.</li>
</ul>
<p>The ideal candidate will have significant experience building distributed systems, data platforms, or infrastructure, and will be comfortable operating in ambiguous, greenfield problem spaces. They will also have deep expertise in data modeling and schema design, experience designing shared platforms used by many teams, and strong instincts around APIs, contracts, and backward compatibility.</p>
<p>Nice to have: experience with knowledge graphs, metadata systems, or search/retrieval systems; experience building systems with governance, auditability, or compliance requirements; and familiarity with dbt, modern analytics stacks, or developer tooling.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed systems, Data platforms, Infrastructure, Data modeling, Schema design, APIs, Contracts, Backward compatibility, Knowledge graphs, Metadata systems, Search/retrieval systems, dbt, Modern analytics stacks, Developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4661362005</Applyto>
      <Location>India - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>601c2dc5-462</externalid>
      <Title>Senior Software Engineer - Distributed Data Systems</Title>
      <Description><![CDATA[<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Our customers use deep data insights to improve their business. We are a customer-obsessed company that leaps at every opportunity to solve technical challenges.</p>
<p>As a software engineer on the Runtime team at Databricks, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>
<p>Some example projects include:</p>
<ul>
<li>Developing the de facto open source standard framework for big data, Apache Spark.</li>
<li>Providing reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.</li>
<li>Building the next generation query optimizer and execution engine that&#39;s fast, tuning-free, scalable, and robust.</li>
</ul>
<p>We look for candidates with a strong foundation in algorithms and data structures and their real-world use cases, experience with distributed systems, databases, and big data systems, and a BS (or higher) in Computer Science or a related technical field.</p>
<p>The pay range for this role is $166,000-$225,000 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>Java, Scala, C++, Apache Spark, Hadoop, Distributed systems, Databases, Big data systems, Algorithms, Data structures, Real-world use cases, Cloud storage backends, Query optimizer, Execution engine</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6544325002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a966b1bf-e76</externalid>
      <Title>Staff Software Engineer, Compute Infrastructure</Title>
<Description><![CDATA[<p>As a Staff Software Engineer, you will shape the backbone of our GPU-driven data centers, powering some of the most advanced workloads in AI and large-scale computing. This isn&#39;t just about keeping the lights on; it&#39;s about architecting the next generation of reliable, secure, and massively scalable infrastructure.</p>
<p>The METALDEV team builds and operates a suite of Go-based services that power large-scale datacenter deployments. These platforms automate complex workflows while providing deep observability and monitoring for tens of thousands of GPU servers and diverse infrastructure components, including CDUs, PDUs, and NVLink switches. Our tooling is designed for next-generation rack systems like NVIDIA GB200 and GB300, as well as a broad range of GPU server platforms.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Providing technical leadership in designing, architecting, and operating large-scale infrastructure services for GPU servers, with a focus on security, reliability, and scalability.</li>
<li>Building and enhancing infrastructure services and automation, including inventory management systems and lifecycle management solutions using open source technologies.</li>
<li>Driving strategic direction for infrastructure automation, lifecycle management, and service orchestration, making MetalDev core services more scalable and resilient.</li>
<li>Defining best practices for API development (REST/gRPC), distributed databases, and Kubernetes orchestration, while mentoring engineers to follow your lead.</li>
<li>Partnering with hardware, software, and operations teams to align infrastructure with business impact.</li>
<li>Contributing to open source communities (e.g., Go, Redfish) through collaboration and technical thought leadership.</li>
<li>Leading and improving CI/CD pipelines for hardware compliance, firmware management, and data systems.</li>
<li>Championing reliability and operational excellence by driving observability (Prometheus/Grafana), production incident response, and continuous service improvement.</li>
</ul>
<p>We&#39;re looking for someone with a strong background in software engineering, particularly in infrastructure, cloud engineering, and distributed databases. You should have experience with Go and a proven track record of building REST/gRPC APIs for mission-critical platforms. Additionally, you should be familiar with architecting and scaling cloud-native Kubernetes infrastructure and distributed services.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>Go, REST/gRPC, Distributed databases, Kubernetes orchestration, API development, Infrastructure services, Automation, Inventory management, Lifecycle management, CI/CD pipelines, Hardware compliance, Firmware management, Data systems, Observability, Production incident response, Continuous service improvement, Kafka, ClickHouse, CRDB, DMTF, RedFish APIs, GPU servers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4603505006</Applyto>
      <Location>Manhattan, NY / Sunnyvale, CA / Bellevue, WA / Livingston, NJ</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d1a7c541-3a1</externalid>
      <Title>Senior Software Engineer - Distributed Data Systems</Title>
      <Description><![CDATA[<p>We are seeking a senior software engineer to join our team in Belgrade. As a founding member of our Belgrade site, you will be involved in the entire development cycle and exemplify all core Databricks values.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Driving requirements clarity and design decisions for ambiguous problems</li>
<li>Producing technical design documents and project plans</li>
<li>Developing new features</li>
<li>Mentoring more junior engineers</li>
<li>Testing, rolling out to production, and monitoring</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>A BS in Computer Science or equivalent practical experience in databases or distributed systems</li>
<li>Comfort working towards a multi-year vision with incremental deliverables</li>
<li>Motivation to deliver customer value and impact</li>
<li>5+ years of production-level experience in Java, Scala, or C++</li>
<li>A solid foundation in algorithms and data structures and their real-world use cases</li>
<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop)</li>
<li>A strong understanding of software engineering principles and practices</li>
</ul>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>
<p>Our commitment to diversity and inclusion is a key part of our culture, and we take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8012800002</Applyto>
      <Location>Belgrade, Serbia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58be0f7c-829</externalid>
      <Title>GTM Engineer</Title>
      <Description><![CDATA[<p>We are hiring a GTM Engineer to help companies turn conversations into their competitive advantage. Our platform combines AI and human intelligence to help contact centers discover customer insights and behavioral best practices, automate conversations and inefficient processes, and empower every team member to work smarter and faster.</p>
<p>As a GTM Engineer, you will work directly with our CEO, CRO, CMO, and Rev Ops leadership to design and deploy an AI native operating system for our revenue engine. Your responsibilities will include:</p>
<ul>
<li>Helping GTM teams prioritize the right accounts using AI and data</li>
<li>Eliminating manual work by replacing research, opportunity prep, and CRM entry with automation and AI</li>
<li>Automating deal execution, including quoting, approvals, and document generation, using AI agents</li>
<li>Owning and optimizing the GTM system by writing custom code and integrating with our GTM stack</li>
</ul>
<p>We are looking for an engineer who can bridge deep technical ability with real GTM understanding, has an entrepreneurial mindset with strong ownership and a bias to build, and is comfortable writing code and working with APIs, automation, data systems, and AI tools.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000 - $230,000</Salaryrange>
      <Skills>AI, Automation, APIs, Data Systems, Cloud Computing, Salesforce, Gong, Clay, Centralize, Slack, Glean, Coda</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that specializes in contact center AI and revenue growth.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5123589008</Applyto>
      <Location>United States, Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>94654b9f-b0f</externalid>
      <Title>Quality Control Supervisor</Title>
      <Description><![CDATA[<p>As a Quality Control Supervisor at Anduril Industries, you will be instrumental in supporting the Quality Inspection Manager by leading and developing a team of quality control inspectors whose performance meets and exceeds the execution requirements. Your focus will be on the tactical execution of inspection processes, including receiving and in-process inspections, to ensure the delivery of safe, high-quality, and reliable products. This role requires a strong understanding of quality control standards, geometric dimensioning and tolerancing (GD&amp;T), calibrated measurement equipment, data systems utilisation, and advanced product quality planning (APQP).</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and mentor a team of quality inspectors, ensuring the consistent delivery of quality performance for various product lines.</li>
<li>Train team members on work instructions, procedures, and hands-on inspection techniques, maintaining accurate training records.</li>
<li>Develop and maintain quality control standards, ensuring their consistent application through effective staff training.</li>
<li>Author and update standard operating procedures (SOPs) related to inspection and quality control processes.</li>
<li>Drive through ambiguities and uncertainties in inspection methods, inspection order execution, and planning to provide inspectors with tactical guidance and direction for execution.</li>
<li>Collect and statistically analyse data to inform decision-making and identify areas for improvement.</li>
<li>Manage and align resources to support various inspection operations, including incoming/receiving material inspection, in-process inspection, end-of-line inspection, and material containment/ad-hoc inspections.</li>
<li>Continuously monitor processes to identify opportunities for enhancement and efficiency gains.</li>
<li>Collaborate with cross-functional leaders and departments to ensure timely product delivery while maintaining quality standards.</li>
<li>Ensure all inspection areas adhere to established processes and best practices, and utilise appropriate work instructions, procedures, and tools.</li>
<li>Partner with Quality Engineering to contribute to Quality roadmap initiatives.</li>
<li>Provide readiness for internal auditing and compliance reviews.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>5+ years of progressive Quality and/or Regulatory experience within the Aerospace, Defence, or Automotive industries.</li>
<li>3+ years of experience in a Management or supervisory role.</li>
<li>Strong understanding of Quality Management Systems &amp; AS9100 requirements.</li>
<li>Prior exposure to the manufacture of electronic components for Aerospace applications.</li>
<li>Strong attention to detail, observation, organisational, and leadership skills.</li>
<li>In-depth knowledge of Quality Improvement and Lean Manufacturing methodologies.</li>
<li>Excellent organisational, problem-solving, and analytical skills.</li>
<li>Excellent communication and interpersonal skills.</li>
<li>Strong leadership and decision-making abilities in a production environment.</li>
<li>U.S. Person Status is required as this position needs to access controlled data.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Technical degree in a related quality/mechanical/aerospace engineering field.</li>
<li>ASQ Certified Quality Engineer, Quality Improvement Associate, or equivalent.</li>
<li>IPC-A-610 or IPC-A-620 Certification.</li>
<li>Knowledge of 6 Sigma, TQM, TPS, Lean, and Low Rate Production methodologies.</li>
<li>Experience with inspection of Electrical and Electronic Components (Power Supplies, PCBs, wire-harnesses).</li>
<li>Experience with data systems, QMS software, automated data collection tools, and generating reports, data visualisation, and data queries.</li>
<li>Experience in drafting and revising standard operating procedures and quality troubleshooting guides for inspection and quality control processes.</li>
<li>Experience with Manufacturing processes and their related systems (Jira, MES, ERP, Teamcenter, CAD).</li>
<li>Experience with auditing processes and procedures in a manufacturing environment.</li>
<li>Experience with validation and testing sub-assemblies and components.</li>
<li>Experience with selecting, buying, qualifying, and implementing inspection equipment.</li>
<li>Experience working with suppliers along with Sourcing and Supplier Quality Engineering teams.</li>
<li>Detail-oriented self-starter with the ability to work with minimal oversight and communicate effectively with cross-functional teams.</li>
<li>Proficiency with computer-aided design (CAD) software and inspection data management software.</li>
</ul>
<p>Travel:</p>
<ul>
<li>50% travel required to local manufacturing sites and HQ in Orange County</li>
<li>5% travel to other Anduril sites out of state.</li>
</ul>
<p>Physical Demands and Working Conditions:</p>
<ul>
<li>Ability to lift up to 35 pounds</li>
<li>Work in clean room environments</li>
<li>Use of precision optical tools and equipment</li>
<li>Visual acuity and colour discrimination required for inspection</li>
<li>Adherence to laser safety and contamination control protocols</li>
</ul>
<p>Salary Range: $86,000-$114,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$86,000-$114,000 USD</Salaryrange>
      <Skills>Quality Management Systems, AS9100, Geometric Dimensioning and Tolerancing, Calibrated Measurement Equipment, Data Systems Utilisation, Advanced Product Quality Planning, Lean Manufacturing Methodologies, Quality Improvement, Leadership, Communication, Interpersonal Skills, 6 Sigma, TQM, TPS, Lean, Low Rate Production, Data Systems, QMS Software, Automated Data Collection Tools, CAD Software, Inspection Data Management Software</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that designs, builds and sells military systems using advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5093671007</Applyto>
      <Location>Santa Ana, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e56cfd84-be6</externalid>
      <Title>Delivery Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Databricks Data Intelligence Platform. We are seeking a Delivery Solutions Architect to play an important role in this journey.</p>
<p>As a Delivery Solutions Architect, you will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. You will also help ensure customer success by bringing increased focus and technical accountability to our most complex customers, who need guidance to accelerate usage of the Databricks workloads they have already selected, helping them maximise the value they get out of our platform and their return on investment.</p>
<p>This is a hybrid technical and commercial role. It is commercial in that you will drive growth in your assigned customers and use cases by leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. In parallel, it is technical: you are expected to become the post-sale technical lead across all Databricks products.</p>
<p>The impact you will have:</p>
<ul>
<li>Engage with Solutions Architects to understand the full use case demand plan for prioritised customers</li>
<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>
<li>Be the first contact for any technical issues or questions related to the production/go-live status of agreed-upon use cases within an account, oftentimes servicing multiple use cases within the largest and most complex organizations</li>
<li>Leverage Shared Services, User Education, Onboarding/Technical Services, and Support resources, escalating to specialist technical experts for tasks that are beyond your scope of activities or expertise</li>
<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>
<li>Navigate Databricks Product and Engineering teams for new product innovations, private previews and upgrade needs</li>
<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>
</ul>
<ul>
<li>Main use cases moving from ‘win’ to production</li>
<li>Enablement / user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of Databricks’ Lakehouse vision)</li>
<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimization)</li>
<li>Executive and operational governance</li>
<li>Provide internal and external updates</li>
<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression</li>
</ul>
<p>What we look for:</p>
<ul>
<li>7+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>
<li>Programming experience in Python, SQL or Scala</li>
<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>
<li>Understanding of solution architecture related to distributed data systems</li>
<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program or project management experience, including account, stakeholder and resource management accountability</li>
<li>Experience resolving complex and important escalations with senior customer executives</li>
<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>
<li>Track record of overachievement against quota, goals or similar objective targets</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent practical experience</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range $180,000-$247,500 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Python, SQL, Scala, Solution architecture, Distributed data systems, Business value and outcomes, Technical program management, Project management, Account management, Stakeholder management, Resource management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform for unifying and democratizing data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8442976002</Applyto>
      <Location>Georgia; Illinois; Massachusetts; New York; North Carolina; Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>46628f21-1ce</externalid>
      <Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Databricks Data Intelligence Platform. As a Delivery Solutions Architect (DSA), you will play an important role during this journey.</p>
<p>You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. You will also help ensure customer success by bringing increased focus and technical accountability to our most complex customers, who need guidance to accelerate usage of the Databricks workloads they have already selected, helping them maximise the value they get out of our platform and their return on investment.</p>
<p>This is a hybrid technical and commercial role. It is commercial in that you will drive growth in your assigned customers and use cases by leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. In parallel, it is technical: you are expected to become the post-sale technical lead across all Databricks products.</p>
<p>The impact you will have:</p>
<ul>
<li>Engage with Solutions Architects to understand the full use case demand plan for prioritised customers</li>
<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>
<li>Be the first contact for any technical issues or questions related to the production/go-live status of agreed-upon use cases within an account, oftentimes servicing multiple use cases within the largest and most complex organizations</li>
<li>Leverage Shared Services, User Education, Onboarding/Technical Services, and Support resources, escalating to specialist technical experts for tasks that are beyond your scope of activities or expertise</li>
<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>
<li>Navigate Databricks Product and Engineering teams for new product innovations, private previews and upgrade needs</li>
<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>
</ul>
<ul>
<li>Main use cases moving from ‘win’ to production</li>
<li>Enablement / user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of Databricks’ Lakehouse vision)</li>
<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimization)</li>
<li>Executive and operational governance</li>
<li>Provide internal and external updates</li>
<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>
<li>Programming experience in Python, SQL or Scala</li>
<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>
<li>Understanding of solution architecture related to distributed data systems</li>
<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program or project management experience, including account, stakeholder and resource management accountability</li>
<li>Experience resolving complex and important escalations with senior customer executives</li>
<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>
<li>Track record of overachievement against quota, goals or similar objective targets</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent practical experience</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Scala, Distributed data systems, Solution architecture, Technical project management, Customer success, Pre-sales, Technical architecture, Consulting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organizations worldwide rely on Databricks.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8476496002</Applyto>
      <Location>Brisbane, Australia; Melbourne, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>688649c6-0a0</externalid>
      <Title>Delivery Solutions Architect - Digital Native Business</Title>
      <Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems. As a Delivery Solutions Architect (DSA), you will play an important role in this journey. You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. You will also help ensure customer success by bringing increased focus and technical accountability to our most complex customers, who need guidance to accelerate usage of the Databricks workloads they have already selected, helping them maximise the value they get out of our platform and their return on investment.</p>
<p>This is a hybrid technical and commercial role. It is commercial in that you will drive growth in your assigned customers and use cases by leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. In parallel, it is technical: you are expected to become the post-sale technical lead across all Databricks products. This requires you to use your skills and technical credibility to engage and communicate at all levels of an organisation.</p>
<p>The impact you will have:</p>
<ul>
<li>Engage with Solutions Architects to understand the full use case demand plan for prioritised customers</li>
<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>
<li>Be the first contact for any technical issues or questions related to the production/go-live status of agreed-upon use cases within an account, oftentimes servicing multiple use cases within the largest and most complex organisations</li>
<li>Leverage Shared Services, User Education, Onboarding/Technical Services, and Support resources, escalating to specialist technical experts for tasks that are beyond your scope of activities or expertise</li>
<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>
<li>Navigate Databricks Product and Engineering teams for new product innovations, private previews and upgrade needs</li>
<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>
</ul>
<ul>
<li>Main use cases moving from &#39;win&#39; to production</li>
<li>Enablement / user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>
<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimisation)</li>
<li>Executive and operational governance</li>
<li>Provide internal and external updates</li>
<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression to your Technical GM</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>
<li>Programming experience in Python, SQL or Scala</li>
<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>
<li>Understanding of solution architecture related to distributed data systems</li>
<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program or project management experience, including account, stakeholder and resource management accountability</li>
<li>Experience resolving complex and important escalations with senior customer executives</li>
<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>
<li>Track record of overachievement against quota, goals or similar objective targets</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>
<li>Can travel up to 30% when needed</li>
</ul>
<p><strong>Pay Range Transparency</strong></p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range $180,000-$247,500 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Python, SQL, Scala, Solution architecture, Distributed data systems, Project management, Account management, Stakeholder management, Resource management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8385234002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1e2803b1-820</externalid>
      <Title>Delivery Solutions Architect - Communications, Media, Entertainment and Games</Title>
<Description><![CDATA[<p>We are seeking a Delivery Solutions Architect to join our team. In this role, you will play a key part in accelerating the adoption and growth of the Databricks platform in our customers, collaborating with our sales and field engineering teams to ensure customer success by increasing focus and technical accountability for our most complex customers.</p>
<p>You will:</p>
<ul>
<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Serve as the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders</li>
<li>Create, own, and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services resources on the delivery of PS Engagement proposals</li>
<li>Navigate Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs</li>
<li>Develop an execution plan covering all activities of all customer-facing technical roles and teams: main use cases moving from &#39;win&#39; to production, enablement/user growth plan, product adoption, organic needs for current investment, executive and operational governance, and KPI reporting on the status of usage and customer health</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>5+ years of experience where you have been accountable for technical project/program delivery within the domain of Data and AI</li>
<li>Programming experience in Python, SQL, or Scala</li>
<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>
<li>An understanding of solution architecture related to distributed data systems</li>
<li>An understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program or project management experience, including account, stakeholder, and resource management accountability</li>
<li>Experience resolving complex and important escalations with senior customer executives</li>
<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis, and managing delivery of complex programs/projects</li>
<li>A track record of overachievement against quota, goals, or similar objective targets</li>
<li>A Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$219,100-$301,300 USD</Salaryrange>
      <Skills>Python, SQL, Scala, Solution architecture, Distributed data systems, Technical program management, Project management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a company that provides a data and AI platform. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8457249002</Applyto>
      <Location>Remote - California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>30648b64-012</externalid>
      <Title>Delivery Solutions Architect</Title>
<Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilising the Databricks Data Intelligence Platform. As a Delivery Solutions Architect (DSA), you will play an important role during this journey.</p>
<p>You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. As a DSA, you will help ensure customer success by increasing focus and technical accountability for our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected, helping them maximise the value they get out of our platform and the return on investment.</p>
<p>This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestration of other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. This is in parallel to being technical, with expectations being that you become the post-sale technical lead across all Databricks products.</p>
<p>This requires you to use your skills and technical credibility to engage and communicate at all levels of an organisation. You will report directly to a Field Engineering Director in Japan.</p>
<p>The impact you will have:</p>
<ul>
<li>Engage with the Solutions Architect to understand the full Use Case Demand Plan for prioritised customers.</li>
<li>Lead the Post-Technical Win technical account strategy and execution plan for the majority of Databricks Use Cases within our most strategic accounts.</li>
<li>Be the accountable technical leader assigned to specific Use Cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks.</li>
<li>Be the first contact for any technical issues or questions related to production/go-live status of agreed-upon Use Cases within an account, oftentimes servicing multiple use cases within the largest and most complex organisations.</li>
<li>Leverage both Shared Services, User Education, Onboarding/Technical Services and Support resources, along with escalating to expert level technical experts to build the right tasks that are beyond your scope of activities or expertise.</li>
<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services resources on the delivery of PS Engagement proposals.</li>
<li>Navigate Databricks Product and Engineering teams for New Product Innovations, Private Previews and Upgrade needs.</li>
<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>
</ul>
<ul>
<li>Main use cases moving from ‘win’ to production</li>
<li>Enablement / user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of Databricks’ Lakehouse vision)</li>
<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimisation)</li>
<li>Executive and operational governance</li>
<li>Provide internal and external updates</li>
<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression, to your Technical GM</li>
</ul>
<p>What we look for:</p>
<ul>
<li>8+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with our customers.</li>
<li>Programming experience in Python, SQL or Scala</li>
<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>
<li>Understanding of solution architecture related to distributed data systems</li>
<li>An understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program or project management experience, including account, stakeholder and resource management accountability</li>
<li>Experience resolving complex and important escalations with senior customer executives</li>
<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>
<li>Track record of overachievement against quota, goals or similar objective targets</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Scala, Solution architecture, Distributed data systems, Project management, Technical program management, Customer success, Pre-sales, Technical architecture, Consulting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organizations worldwide rely on Databricks.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8450551002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0c456364-565</externalid>
      <Title>Delivery Solutions Architect</Title>
<Description><![CDATA[<p>As a Delivery Solutions Architect at Databricks, you will be a trusted technical advisor embedded within the customer organisation. You will work closely with sales and field engineering to accelerate adoption and growth of the Databricks platform. You will ensure customer success by providing technical accountability for our most complex customers, helping them maximise the value of Databricks workloads they have already selected and improving their return on investment.</p>
<p>This role blends deep technical leadership with strategic customer engagement. You will own the post-sales technical strategy for the customer’s highest-value use cases and serve as their primary advisor across the Databricks platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Being the accountable Databricks Architect for your assigned customers, working with technical teams to guide priority use cases from design through go-live, removing blockers, providing best practices, and ensuring stable, scalable adoption.</li>
<li>Leading the post-technical-win strategy and execution plan for major Databricks use cases, aligning with Solutions Architects to understand full demand plans and drive clarity across multiple selling teams and stakeholders.</li>
<li>Owning the technical leadership of assigned use cases, creating certainty from ambiguity and coordinating onboarding, enablement, success, go-live, and healthy consumption of workloads selected for Databricks.</li>
<li>Serving as the first point of contact for production/go-live status, often across multiple complex use cases within large enterprise organisations.</li>
<li>Orchestrating the broader Databricks ecosystem (Shared Services, User Education, Onboarding/Technical Services, Support, and specialist technical teams) to ensure high-quality delivery and escalate advanced issues when needed.</li>
<li>Creating and executing a point of view for accelerating use cases into production, collaborating with Professional Services on proposals as needed.</li>
<li>Partnering with Product and Engineering to introduce new capabilities, private previews, and upgrade paths that support customer roadmaps.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Programming experience in Python, SQL, or Scala, and a solid understanding of distributed data systems.</li>
<li>5+ years of experience delivering Data, Analytics, or AI projects, with the ability to contribute to architectural discussions with customers.</li>
<li>Experience in customer-facing technical roles such as technical architecture, pre-sales, consulting, or customer success.</li>
<li>Ability to guide architectural decisions in domains such as data engineering, data architecture, data warehousing, or data science.</li>
<li>Demonstrated ability to drive delivery outcomes without hands-on keyboard responsibilities.</li>
<li>Experience resolving complex escalations with senior customer stakeholders.</li>
<li>Understanding of how to connect technical deliverables to business value.</li>
<li>Track record of achieving or exceeding goals or objectives.</li>
<li>Bachelor’s degree in Computer Science, Information Systems, Engineering, or equivalent experience.</li>
<li>Fluency in English is required; French or German language skills are a plus.</li>
<li>Ability to travel up to 30%.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Scala, Distributed data systems, Data engineering, Data architecture, Data warehousing, Data science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It has over 10,000 customers worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8309177002</Applyto>
      <Location>Zürich, Switzerland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5834e3ad-7b2</externalid>
      <Title>Senior Site Reliability Engineer - Security and Data Systems (Federal)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>Senior Site Reliability Engineer (SRE) - Security and Data Systems</strong></p>
<p>Our company is seeking a highly skilled Senior Site Reliability Engineer to join our team. We are a SaaS company specializing in securing large-scale systems. This role is a blend of software engineering and systems administration, where you&#39;ll be responsible for building and maintaining highly reliable, scalable, and secure infrastructure. You will be a key contributor, applying your expertise to automate manual processes and proactively solve complex problems before they become incidents. The role also involves incident response and includes on-call shifts.</p>
<p>*This position requires the ability to access U.S. National Security information. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; see 22 CFR 120.15) upon hire.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Platform &amp; Reliability: Design, build, and maintain the core infrastructure that underpins our security SaaS offerings, ensuring high availability, performance, and scalability. This includes building and operating the tooling for our Snowflake data systems.</li>
<li>Automation: Develop robust automation using code to eliminate toil and ensure consistency across our environments. You&#39;ll be a key driver in automating everything from infrastructure provisioning to application deployment and incident response.</li>
<li>Security &amp; Compliance: Work closely with our security teams to embed a security-first mindset into all our processes and infrastructure. You will be responsible for ensuring our systems and data platforms are compliant with industry standards.</li>
<li>Incident Response: Participate in on-call rotations and be a primary responder for critical incidents, leading root cause analysis and implementing preventative measures to ensure issues don&#39;t recur.</li>
<li>Collaboration: Partner with development, data science, and security teams to provide expert guidance on architectural decisions, best practices, and the implementation of new services.</li>
</ul>
<p><strong>Key Skills &amp; Qualifications</strong></p>
<ul>
<li>U.S. Person Status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee)</li>
<li>Strong Coding Skills: You are a developer at heart and are comfortable writing production-level code to solve complex operational challenges.</li>
<li>Infrastructure as Code (IaC): Deep experience with Terraform for provisioning and managing cloud infrastructure and services.</li>
<li>Continuous Delivery: Familiarity with modern CI/CD practices and tools, particularly Spinnaker, to automate and standardize our release pipelines.</li>
<li>Containerization &amp; Orchestration: Expertise in container technologies and hands-on experience managing large-scale, production-ready clusters with Kubernetes.</li>
<li>Database Migrations: Experience with database schema management tools like Flyway for safely and reliably handling database changes.</li>
<li>Data Systems: Direct experience with large-scale data systems, specifically with the Snowflake platform.</li>
<li>AI/ML Experience (a plus): Experience or a strong interest in AI/ML, particularly how these technologies can be applied to improve reliability, security, and operational efficiency (e.g., AIOps, predictive analysis).</li>
<li>Problem-Solving: Excellent analytical and problem-solving skills with a proactive approach to identifying and addressing potential issues.</li>
</ul>
<p>This role requires in-person onboarding and travel to our San Francisco Office during the first week of employment.</p>
<p>(P18058_3355591)</p>
<p>Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.</p>
<p>The annual base salary range for this position for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington is between: $147,000-$202,400 USD</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$147,000-$202,400 USD</Salaryrange>
      <Skills>U.S. Person Status, Strong Coding Skills, Infrastructure as Code (IaC), Continuous Delivery, Containerization &amp; Orchestration, Database Migrations, Data Systems, AI/ML Experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a SaaS company specializing in securing large-scale systems.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7591606</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c94fae85-14c</externalid>
      <Title>Delivery Solutions Architect - Digital Native Business</Title>
      <Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Databricks Data Intelligence Platform.</p>
<p>As a Delivery Solutions Architect (DSA), you will play an important role during this journey. You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in Digital Native customers.</p>
<p>You will also help ensure customer success by increasing focus and technical accountability for our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected, helping them maximise the value they get out of our platform and the return on investment.</p>
<p>This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestration of other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon.</p>
<p>This is in parallel to being technical, with the expectation that you become the post-sale technical lead across all Databricks products. This requires you to use your skills and technical credibility to engage and communicate at all levels of an organisation.</p>
<p>You will report directly to a DSA Manager within the Field Engineering organization.</p>
<p>The impact you will have:</p>
<ul>
<li>Engage with Solutions Architects to understand the full use case demand plan for prioritized customers</li>
<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>
<li>Be the first contact for any technical issues or questions related to production/go-live status of agreed-upon use cases within an account, oftentimes servicing multiple use cases within the largest and most complex organisations</li>
<li>Leverage Shared Services, User Education, Onboarding/Technical Services and Support resources, escalating to expert-level technical specialists when tasks fall beyond your scope of activities or expertise</li>
<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>
<li>Navigate Databricks Product and Engineering teams for new product innovations, private previews and upgrade needs</li>
<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>
</ul>
<ul>
<li>Main use cases moving from &#39;win&#39; to production</li>
<li>Enablement / user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>
<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimisation)</li>
<li>Executive and operational governance</li>
<li>Provide internal and external updates</li>
<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression to your Technical GM</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>
<li>Programming experience in Python, SQL or Scala</li>
<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>
<li>Understanding of solution architecture related to distributed data systems</li>
<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program or project management experience, including account, stakeholder and resource management accountability</li>
<li>Experience resolving complex and important escalations with senior customer executives</li>
<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>
<li>Track record of overachievement against quota, goals or similar objective targets</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>
<li>Can travel up to 30% when needed</li>
</ul>
<p><strong>Pay Range Transparency</strong></p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range $180,000-$247,500 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Python, SQL, Scala, Solution architecture, Distributed data systems, Technical project management, Customer success, Pre-sales, Technical architecture, Consulting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8385230002</Applyto>
      <Location>Remote - California; Remote - Colorado; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dcccc99d-20f</externalid>
      <Title>Delivery Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>As a Delivery Solutions Architect, you will play a key role in accelerating the adoption and growth of the Databricks Platform in public sector customers. You will collaborate with sales and field engineering teams to drive growth in assigned customers and use cases. This is a hybrid technical and commercial role that requires you to utilize your skills and technical credibility to engage and communicate effectively with all levels of an organization.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Engaging with solutions architects to understand full use case demand plans for prioritized customers</li>
<li>Leading post-technical win technical account strategy and execution plans for Databricks use cases within strategic accounts</li>
<li>Being the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders</li>
<li>Creating, owning, and executing a point-of-view on how key use cases can be accelerated into production</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>U.S. citizenship</li>
<li>7+ years of experience in technical project/program delivery within the domain of data and AI</li>
<li>Programming experience in Python, SQL, or Scala</li>
<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>
<li>Understanding of solution architecture related to distributed data systems</li>
</ul>
<p>Pay range transparency: $180,000-$247,500 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Python, SQL, Scala, Data and AI, Solution architecture, Distributed data systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8289852002</Applyto>
      <Location>New Jersey; Remote - New York; Remote - Pennsylvania; Remote - Washington D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0abf66ee-ccb</externalid>
      <Title>Delivery Solutions Architect - Healthcare &amp; Life Sciences</Title>
<Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Databricks Data Intelligence Platform. As a Delivery Solutions Architect (DSA), you will play an important role during this journey. You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. You will also help ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected, helping them maximise the value they get out of our platform and the return on investment.</p>
<p>This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestration of other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. This is in parallel to being technical, with expectations being that you become the post-sale technical lead across all Databricks products. This requires you to use your skills and technical credibility to engage and communicate at all levels with an organisation.</p>
<p>The impact you will have:</p>
<ul>
<li>Engage with Solutions Architects to understand the full use case demand plan for prioritised customers</li>
<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>
<li>Be the first contact for any technical issues or questions related to production/go-live status of agreed-upon use cases within an account, oftentimes servicing multiple use cases within the largest and most complex organisations</li>
<li>Leverage Shared Services, User Education, Onboarding/Technical Services and Support resources, escalating to expert-level technical specialists for tasks that are beyond your scope of activities or expertise</li>
<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>
<li>Navigate Databricks Product and Engineering teams for new product Innovations, private previews and upgrade needs</li>
<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>
</ul>
<ul>
<li>Main use cases moving from &#39;win&#39; to production</li>
<li>Enablement / user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>
<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimisation)</li>
<li>Executive and operational governance</li>
<li>Provide internal and external updates</li>
<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression to your Technical GM</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>
<li>Programming experience in Python, SQL or Scala</li>
<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>
<li>Understanding of solution architecture related to distributed data systems</li>
<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program or project management, including account, stakeholder and resource management accountability</li>
<li>Experience resolving complex and important escalations with senior customer executives</li>
<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>
<li>Track record of overachievement against quota, goals or similar objective targets</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>
<li>Can travel up to 30% when needed</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Python, SQL, Scala, Data and AI, Solution architecture, Distributed data systems, Business value attribution, Project management, Customer success, Technical architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8233904002</Applyto>
      <Location>Northeast - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>12b7c011-a90</externalid>
      <Title>Staff Software Engineer - Distributed Data Systems</Title>
      <Description><![CDATA[<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>
<p>We develop and operate one of the largest scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day. At our scale, we regularly observe cloud hardware, network, and operating system faults, and our software must gracefully shield our customers from any of the above.</p>
<p>As a software engineer on the Runtime team at Databricks, you will be building the next generation distributed data storage and processing systems that can outperform specialised SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>
<p>Below are some example projects:</p>
<ul>
<li>Apache Spark: Develop the de facto open source standard framework for big data.</li>
<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>
<li>Delta Lake: A storage management system that combines the scale and cost-efficiency of data lakes, the performance and reliability of a data warehouse, and the low latency of streaming.</li>
<li>Delta Pipelines: It&#39;s difficult to manage even a single data engineering pipeline. The goal of the Delta Pipelines project is to make it simple to orchestrate and operate tens of thousands of data pipelines.</li>
<li>Performance Engineering: Build the next generation query optimizer and execution engine that&#39;s fast, tuning-free, scalable, and robust.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>BS in Computer Science, related technical field or equivalent practical experience.</li>
<li>Optional: MS or PhD in databases, distributed systems.</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>
<li>Driven by delivering customer value and impact.</li>
<li>8+ years of production-level experience in either Java, Scala, or C++.</li>
<li>Strong foundation in algorithms and data structures and their real-world use cases.</li>
<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop).</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,000-$260,000 USD</Salaryrange>
      <Skills>Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a global organisation that builds and runs the world&apos;s best data and AI infrastructure platform, serving thousands of organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5646855002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d91bec16-126</externalid>
      <Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Databricks Data Intelligence Platform. As a Delivery Solutions Architect (DSA), you will play an important role during this journey.</p>
<p>You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. You will also help ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected, helping them maximise the value they get out of our platform and the return on investment.</p>
<p>This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestration of other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. This is in parallel to being technical, with expectations being that you become the post-sale technical lead across all Databricks products.</p>
<p>The impact you will have:</p>
<ul>
<li>Engage with Solutions Architects to understand the full use case demand plan for prioritised customers</li>
<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Be the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>
<li>Be the first contact for any technical issues or questions related to production/go-live status of agreed-upon use cases within an account, oftentimes servicing multiple use cases within the largest and most complex organizations</li>
<li>Leverage Shared Services, User Education, Onboarding/Technical Services and Support resources, escalating to expert-level technical specialists for tasks that are beyond your scope of activities or expertise</li>
<li>Create, own and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>
<li>Navigate Databricks Product and Engineering teams for new product Innovations, private previews and upgrade needs</li>
<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>
</ul>
<ul>
<li>Main use cases moving from &#39;win&#39; to production</li>
<li>Enablement / user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>
<li>Organic needs for current investment (e.g. cloud cost control, tuning &amp; optimization)</li>
<li>Executive and operational governance</li>
<li>Provide internal and external updates</li>
<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption and use case progression to your Technical GM</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience where you have been accountable for technical project / program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>
<li>Programming experience in Python, SQL or Scala</li>
<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>
<li>Understanding of solution architecture related to distributed data systems</li>
<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program or project management, including account, stakeholder and resource management accountability</li>
<li>Experience resolving complex and important escalations with senior customer executives</li>
<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis and managing delivery of complex programmes/projects</li>
<li>Track record of overachievement against quota, goals or similar objective targets</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>
<li>Can travel up to 30% when needed</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Scala, Data and AI, Solution architecture, Distributed data systems, Technical project management, Customer success, Consulting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organizations worldwide rely on Databricks.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8285292002</Applyto>
      <Location>Auckland, New Zealand</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4eeea81a-54e</externalid>
      <Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>As a Delivery Solutions Architect at Databricks, you will play a crucial role in empowering customers to solve the world&#39;s toughest data problems. You will collaborate with sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. Your primary responsibility will be to ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected.</p>
<p>This is a hybrid technical and commercial role that requires you to drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. You will also be responsible for becoming the post-sale technical lead across all Databricks products and using your skills and technical credibility to engage and communicate at all levels with an organization.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Engaging with Solutions Architects to understand the full use case demand plan for prioritized customers</li>
<li>Leading the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Being the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live, and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>
<li>Being the first contact for any technical issues or questions related to production/go live status of agreed-upon use cases within an account, oftentimes servicing multiple use cases within the largest and most complex organizations</li>
<li>Leveraging Shared Services, User Education, Onboarding/Technical Services, and Support resources, escalating to expert-level technical specialists for tasks that are beyond your scope of activities or expertise</li>
<li>Creating, owning, and executing a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>
<li>Navigating Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs</li>
<li>Developing an execution plan that covers all activities of all customer-facing technical roles and teams to cover the below work streams:</li>
</ul>
<ul>
<li>Main use cases moving from &#39;win&#39; to production</li>
<li>Enablement/user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>
<li>Organic needs for current investment (e.g., cloud cost control, tuning &amp; optimization)</li>
<li>Executive and operational governance</li>
<li>Providing internal and external updates</li>
<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption, and use case progression to your Technical GM</li>
</ul>
<p>Key qualifications include:</p>
<ul>
<li>5+ years of experience where you have been accountable for technical project/program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>
<li>Programming experience in Python, SQL, or Scala</li>
<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>
<li>Understanding of solution architecture related to distributed data systems</li>
<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program or project management, including account, stakeholder, and resource management accountability</li>
<li>Experience resolving complex and important escalations with senior customer executives</li>
<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis, and managing delivery of complex programs/projects</li>
<li>Track record of overachievement against quota, goals, or similar objective targets</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Scala, Data and AI, Solution architecture, Distributed data systems, Business value attribution, Technical program management, Customer success, Technical architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organizations worldwide rely on the platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8137000002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>726e518c-28f</externalid>
<Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>Job Title: Delivery Solution Architect</p>
<p>We are seeking a highly skilled Delivery Solution Architect to join our team. As a Delivery Solution Architect, you will be responsible for delivering technical solutions to customers and collaborating with sales and field engineering teams to accelerate customer adoption of our platform.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Collaborate with sales and field engineering teams to deliver technical solutions to customers.</li>
<li>Provide technical guidance and support to customers to ensure they get the maximum value and ROI from our platform.</li>
<li>Work closely with customers to understand their business requirements and develop tailored solutions to meet their needs.</li>
<li>Develop and maintain relationships with key stakeholders, including customers, partners, and internal teams.</li>
<li>Collaborate with cross-functional teams to identify and prioritize customer needs and develop solutions to address them.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>8+ years of experience in delivering technical projects or programs in the data and AI space.</li>
<li>Strong understanding of distributed data systems and solution architecture.</li>
<li>Experience working with customers to deliver technical solutions and providing technical guidance and support.</li>
<li>Strong communication and interpersonal skills, with the ability to work effectively with customers, partners, and internal teams.</li>
<li>Experience working in a fast-paced environment and prioritizing multiple tasks and deadlines.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive benefits package, including health insurance, retirement plan, and paid time off.</li>
<li>Opportunity to work with a leading-edge technology company and contribute to the development of innovative solutions.</li>
<li>Collaborative and dynamic work environment with a team of experienced professionals.</li>
<li>Professional development opportunities, including training and education programs.</li>
</ul>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Scala, Distributed data systems, Solution architecture, Customer-facing technical solutions, Technical guidance and support, Cloud computing, Data engineering, Machine learning, Data science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8428882002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c9da749d-250</externalid>
      <Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>As a Delivery Solutions Architect at Databricks, you will play a crucial role in empowering customers to solve the world&#39;s toughest data problems using the Databricks Data Intelligence Platform. You will collaborate with sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. Your primary responsibility will be to ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected.</p>
<p>This is a hybrid technical and commercial role. You will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. You will also be the post-sale technical lead across all Databricks products, requiring you to use your skills and technical credibility to engage and communicate at all levels with an organization.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Engaging with Solutions Architects to understand the full use case demand plan for prioritized customers</li>
<li>Leading the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Being the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live, and healthy consumption of the workloads where the customer has made the decision to consume Databricks</li>
<li>Being the first contact for any technical issues or questions related to the production/go-live status of agreed-upon use cases within an account, often serving multiple use cases within the largest and most complex organizations</li>
<li>Leveraging Shared Services, User Education, Onboarding/Technical Services, and Support resources, and escalating to specialist technical experts for tasks beyond your own scope of activities or expertise</li>
<li>Creating, owning, and executing a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>
<li>Navigating Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs</li>
<li>Developing an execution plan that covers all activities of all customer-facing technical roles and teams across the work streams below:</li>
</ul>
<ul>
<li>Main use cases moving from &#39;win&#39; to production</li>
<li>Enablement/user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>
<li>Organic needs for current investment (e.g., cloud cost control, tuning &amp; optimization)</li>
<li>Executive and operational governance</li>
<li>Provide internal and external updates</li>
<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption, and use case progression to your Technical GM</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years of experience where you have been accountable for technical project/program delivery within the domain of Data and AI and where you can contribute to technical debate and design choices with customers</li>
<li>Programming experience in Python, SQL, or Scala</li>
<li>Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role</li>
<li>Understanding of solution architecture related to distributed data systems</li>
<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program or project management, including account, stakeholder, and resource management accountability</li>
<li>Experience resolving complex and important escalations with senior customer executives</li>
<li>Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis, and managing delivery of complex programs/projects</li>
<li>Track record of overachievement against quota, goals, or similar objective targets</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Scala, Data and AI, Solution architecture, Distributed data systems, Business value attribution, Technical program management, Customer success, Pre-sales, Technical architecture, Consulting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8465963002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f3af420-38f</externalid>
      <Title>Staff Software Engineer - Distributed Data Systems</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to join our Runtime team. As a software engineer on this team, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>
<p>Some example projects include:</p>
<ul>
<li>Developing the de facto open source standard framework for big data, Apache Spark.</li>
<li>Providing reliable and high performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>
<li>Building the next generation query optimizer and execution engine that&#39;s fast, tuning free, scalable, and robust.</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>BS (or higher) in Computer Science or a related technical field, or equivalent practical experience.</li>
<li>8+ years of production-level experience in Java, Scala, or C++.</li>
<li>Strong foundation in algorithms and data structures and their real-world use cases.</li>
<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop).</li>
</ul>
<p>We offer a competitive salary range of $182,400-$247,000 USD, annual performance bonus, equity, and comprehensive benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$182,400-$247,000 USD</Salaryrange>
      <Skills>Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6937001002</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>66e1a634-e89</externalid>
      <Title>Backend Engineer, Analytics Instrumentation (Golang)</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p>GitLab is the intelligent orchestration platform for DevSecOps. As an Intermediate Backend Engineer, you&#39;ll guide the design and development of backend systems that help the company identify customer usage patterns across GitLab SaaS and Self-Managed deployments. That data informs product decisions.</p>
<p>This role offers the opportunity to build foundational infrastructure that makes instrumentation simpler and more reliable for teams at GitLab.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain a unified Go-based instrumentation service that consolidates instrumentation across the entire company, eliminating the need for multiple language-specific SDKs while maintaining reliability and performance.</li>
<li>Manage the sending, transit, and quality of instrumentation data across the system, ensuring data integrity that directly impacts the company&#39;s key prioritization and usage billing accuracy.</li>
<li>Train and assist product development teams across the company on how to instrument their features using the unified service, providing documentation, guidance, and technical help.</li>
<li>Manage on-call duties during working hours for systems that handle usage billing and instrumentation, ensuring system reliability and quick response to critical issues.</li>
<li>Work across research and development teams and the enterprise data organization to identify requirements and deliver solutions that serve multiple stakeholders.</li>
<li>Make key architectural decisions that balance the needs of product teams (who need ease of use) with data consumers (who need reliability and correctness), ensuring the system serves as a foundational service for the company.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Proficiency in the Go programming language, with experience building and maintaining production services.</li>
<li>Strong backend development experience, with the ability to design scalable, reliable systems serving internal and external customers.</li>
<li>Experience with infrastructure concerns such as system reliability, performance at scale, data quality, and observability.</li>
<li>Experience designing and building APIs (REST, gRPC, or similar) that other teams integrate with.</li>
<li>Experience working in cross-functional teams with product teams, data consumers, and other internal stakeholders across team boundaries.</li>
<li>Experience with instrumentation, analytics, data systems, or similar foundational infrastructure in application environments such as Ruby on Rails or comparable stacks.</li>
</ul>
<p>About the Team:</p>
<p>Our team is part of the Data Engineering organization and runs a foundational service used by all research and development teams at GitLab. We manage the systems that send, transport, and validate instrumentation data across the company, giving us visibility into customer usage patterns across GitLab SaaS and Self-Managed deployment environments. This data informs usage billing and product planning. We&#39;re building a unified, Go-based instrumentation service that replaces multiple language-specific SDKs, making it easier for teams to instrument their features while ensuring data integrity for billing and analysis.</p>
<p>How GitLab Supports Full-Time Employees:</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go programming language, backend development, API design, infrastructure concerns, instrumentation, analytics, data systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides a suite of tools for version control, collaboration, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8481929002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>27a278ac-0b7</externalid>
      <Title>Backend Engineer, Analytics Instrumentation (Golang)</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p>We are seeking an experienced Backend Engineer to join our team in India. As a Backend Engineer, you will be responsible for designing and developing backend systems that help the company identify customer usage patterns across GitLab SaaS and Self-Managed deployments.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain a unified Go-based instrumentation service that consolidates instrumentation across the entire company, eliminating the need for multiple language-specific SDKs while maintaining reliability and performance.</li>
<li>Manage the sending, transit, and quality of instrumentation data across the system, ensuring data integrity that directly impacts the company&#39;s key prioritization and usage billing accuracy.</li>
<li>Train and assist product development teams across the company on how to instrument their features using the unified service, providing documentation, guidance, and technical help.</li>
<li>Manage on-call duties during working hours for systems that handle usage billing and instrumentation, ensuring system reliability and quick response to critical issues.</li>
<li>Work across research and development teams and the enterprise data organization to identify requirements and deliver solutions that serve multiple stakeholders.</li>
<li>Make key architectural decisions that balance the needs of product teams (who need ease of use) with data consumers (who need reliability and correctness), ensuring the system serves as a foundational service for the company.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Proficiency in the Go programming language, with experience building and maintaining production services.</li>
<li>Strong backend development experience, with the ability to design scalable, reliable systems serving internal and external customers.</li>
<li>Experience with infrastructure concerns such as system reliability, performance at scale, data quality, and observability.</li>
<li>Experience designing and building APIs (REST, gRPC, or similar) that other teams integrate with.</li>
<li>Experience working in cross-functional teams with product teams, data consumers, and other internal stakeholders across team boundaries.</li>
<li>Experience with instrumentation, analytics, data systems, or similar foundational infrastructure in application environments such as Ruby on Rails or comparable stacks.</li>
</ul>
<p>About the Team:</p>
<p>Our team is part of the Data Engineering organization and runs a foundational service used by all research and development teams at GitLab. We manage the systems that send, transport, and validate instrumentation data across the company, giving us visibility into customer usage patterns across GitLab SaaS and Self-Managed deployment environments. This data informs usage billing and product planning. We&#39;re building a unified, Go-based instrumentation service that replaces multiple language-specific SDKs, making it easier for teams to instrument their features while ensuring data integrity for billing and analysis.</p>
<p>How GitLab Supports Full-Time Employees:</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
<p>Please note that we welcome interest from candidates with varying levels of experience; many successful candidates do not meet every single requirement. Additionally, studies have shown that people from underrepresented groups are less likely to apply to a job unless they meet every single qualification. If you&#39;re excited about this role, please apply and allow our recruiters to assess your application.</p>
<p>Country Hiring Guidelines: GitLab hires new team members in countries around the world. All of our roles are remote, however some roles may carry specific location-based eligibility requirements. Our Talent Acquisition team can help answer any questions about location after starting the recruiting process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go programming language, backend development, API design, instrumentation, analytics, data systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides a suite of tools for version control, collaboration, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8481922002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>42187d42-78e</externalid>
      <Title>Staff Engineer (Backend, DevOps, Infrastructure)</Title>
      <Description><![CDATA[<p>About Zuma</p>
<p>Zuma is pioneering the future of agentic AI, and our focus is to transform the rental market experience for consumers and property managers alike. Our innovative platform is engineered from the ground up to boost operational efficiency and enhance support capabilities for property management businesses across the US and Canada, a ~$200B market.</p>
<p>Off the back of our Series-A in early 2024, Zuma is scaling rapidly. Achieving our vision requires a team of passionate, innovative individuals eager to leverage technology to redefine customer-business interactions. We&#39;re on the hunt for exceptional talent ready to join our mission and contribute to building a groundbreaking technology that reshapes how businesses engage with customers.</p>
<p>As a Staff Engineer, you will:</p>
<p>Help define how humans collaborate with intelligent systems in one of the largest and most underserved industries in the world: property management. You’ll shape the technical foundation of a platform that is not just supporting human workflows, but executing them autonomously through AI agents. This is a rare opportunity to influence how an entire industry evolves, building tools that transform repetitive operational tasks into seamless, intelligent experiences.</p>
<p>Your work will directly contribute to how trust is built between humans and machines, how operations scale without added headcount, and how residents and staff experience a new, AI-powered standard of service. We&#39;re not just building software; we&#39;re designing AI that people want to work with: delightful, trustworthy, and deeply effective.</p>
<p>Join us to help lead the AI revolution in multifamily, drive meaningful real-world impact, and be part of reimagining what work can feel like when done side-by-side with intelligent agents.</p>
<p>You will be a cornerstone of our engineering organization, reporting to the VPE. This is a pivotal role where you&#39;ll lead critical system rewrites, architect scalable foundations for our AI platform, and establish the technical standards that will shape our engineering culture for years to come.</p>
<p>You&#39;ll work at the intersection of cutting-edge LLM technology and practical business applications, creating sophisticated systems that power our AI leasing agent while building self-serve experiences that enable rapid customer onboarding.</p>
<p>As our first US-based engineer, you&#39;ll bridge the gap between our product vision and technical implementation. This role offers a rare opportunity to directly influence how we architect the next generation of our platform.</p>
<p>You&#39;ll tackle projects like rebuilding our onboarding/configuration system to be self-serve, creating robust analytics infrastructure to measure AI performance, and reimagining our integration framework to connect seamlessly with customer systems.</p>
<p>Your work will significantly reduce manual engineering overhead while enabling rapid scaling of our customer base.</p>
<p>We&#39;re looking for a Staff Engineer to help us bring that future to life. This is not just another dev role. You&#39;ll be hands-on shaping the technical DNA of Zuma. You&#39;ll architect critical systems, tame legacy code, build net-new AI-powered experiences, and lay down the patterns future engineers will inherit.</p>
<p>If you&#39;re obsessed with building real products people use, especially products powered by LLMs, this might be your playground.</p>
<p><strong>Why This Could Be Your Dream Role</strong></p>
<ul>
<li>You&#39;ll work directly with cutting-edge LLM technology in a real-world application</li>
<li>You want to work at a company where customers feel your impact every day</li>
<li>You&#39;ll architect AI-powered systems that are transforming the real estate industry</li>
<li>You&#39;ll have autonomy to design and implement innovative technical solutions</li>
<li>Your work will directly impact thousands of apartment communities and millions of renters</li>
<li>You&#39;ll receive significant equity in a venture-backed company with strong traction</li>
<li>As we scale, your role and influence will grow with the company</li>
</ul>
<p><strong>Why You Might Want to Think Twice</strong></p>
<ul>
<li>This is a demanding role that will often require extended hours and deep commitment</li>
<li>As a founding team member, you&#39;ll need to wear multiple hats and step outside your comfort zone</li>
<li>You&#39;ll need to make thoughtful tradeoffs between innovation and immediate needs</li>
<li>You&#39;ll interact directly with customers to understand their needs and occasionally travel to their offices</li>
<li>We&#39;re a startup - priorities can shift rapidly as we respond to market opportunities and customer needs</li>
<li>If you&#39;re not comfortable getting your hands dirty with legacy code or speaking directly with customers, this isn&#39;t the job for you</li>
</ul>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead critical system rewrites to transform our architecture into a highly scalable, resilient foundation</li>
<li>Own the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands</li>
<li>Build and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products</li>
<li>Set up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability</li>
<li>Establish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform</li>
<li>Drive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions</li>
<li>Mentor engineers and elevate the team&#39;s capabilities across infrastructure, scalability, and AI product development</li>
</ul>
<p><strong>Your Experience Looks Like</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field</li>
<li>5+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability</li>
<li>Proven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services</li>
<li>Experience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, Pulumi)</li>
<li>Hands-on experience with database design, performance tuning, and scaling high-throughput data systems</li>
<li>Familiarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices</li>
<li>Strong communication skills and ability to work effectively in a distributed, fast-paced environment</li>
<li>Comfortable operating in early-stage, high-ownership environments with evolving requirements</li>
<li>Bonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure</li>
<li>Bonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows</li>
</ul>
<p><strong>Guiding Principles</strong></p>
<ul>
<li>Customer‑First Outcomes</li>
</ul>
<p>Every commit should trace back to resident or operator value. Whether it’s a new feature, infra investment, or AI capability, if it doesn’t solve a real problem, it doesn’t ship.</p>
<ul>
<li>Bias for Simplicity</li>
</ul>
<p>We favor composable primitives over clever abstractions. Open standards, clean APIs, and clear contracts win over custom complexity, even if the custom version is cooler.</p>
<ul>
<li>Quality Is a Gate, Not an After‑Thought</li>
</ul>
<p>Quality is built-in from day one. Our definition of done includes: test coverage, performance checks, basic observability, and internal docs. Shipping fast doesn’t mean skipping craftsmanship.</p>
<ul>
<li>Data‑Driven Choices</li>
</ul>
<p>We use data to guide, not paralyze, our decision-making. We track leading indicators (cycle time, defect rate, NPS) and lagging signals (retention, revenue impact). We keep instrumentation lightweight but meaningful: signal over spreadsheets.</p>
<ul>
<li>Transparency &amp; Written Culture</li>
</ul>
<p>Good ideas don’t expire in Zoom. We operate in public.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, API design, system architecture, cloud-based services, cloud infrastructure, Infrastructure as Code, database design, performance tuning, scaling high-throughput data systems, CI/CD pipelines, automated testing, modern DevOps practices, React, TypeScript, LLM-based systems, AI infrastructure, agentic AI workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Zuma</Employername>
      <Employerlogo>https://logos.yubhub.co/zuma.com.png</Employerlogo>
      <Employerdescription>Zuma is a technology company that provides a platform for property management.</Employerdescription>
      <Employerwebsite>https://www.zuma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/getzuma/800b8d69-b1e0-4524-a0a7-a5cec8b337b5</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c20d7221-4b5</externalid>
      <Title>Support Engineer</Title>
      <Description><![CDATA[<p>As a Support Engineer at Zuma, you&#39;ll be a bridge between our customers, engineering team, and product vision. You&#39;ll ensure new customers onboard smoothly, integrations run reliably, and support operations scale as we grow. This is a hands-on role for someone who loves problem-solving, can dive into APIs and databases, and takes pride in clear documentation and communication.</p>
<p>You&#39;ll help property managers succeed with our AI platform while also driving continuous improvements in our internal tools and processes.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead critical system rewrites to transform our architecture into a highly scalable, resilient foundation</li>
<li>Own the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands</li>
<li>Build and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products</li>
<li>Set up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability</li>
<li>Establish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform</li>
<li>Drive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions</li>
<li>Mentor engineers and elevate the team&#39;s capabilities across infrastructure, scalability, and AI product development</li>
</ul>
<p>Your Experience Looks Like:</p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field</li>
<li>3+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability</li>
<li>Proven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services</li>
<li>Experience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, Pulumi)</li>
<li>Hands-on experience with database design, performance tuning, and scaling high-throughput data systems</li>
<li>Familiarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices</li>
<li>Strong communication skills and ability to work effectively in a distributed, fast-paced environment</li>
<li>Comfortable operating in early-stage, high-ownership environments with evolving requirements</li>
<li>Bonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure</li>
<li>Bonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows</li>
</ul>
<p>Guiding Principles:</p>
<ul>
<li>Customer‑First Outcomes</li>
<li>Bias for Simplicity</li>
<li>Quality Is a Gate, Not an After‑Thought</li>
<li>Data‑Driven Choices</li>
<li>Transparency &amp; Written Culture</li>
</ul>
<p>Other Benefits:</p>
<ul>
<li>Great health insurance, dental, and vision</li>
<li>Gym and workspace stipends</li>
<li>Computer and workspace enhancements</li>
<li>Unlimited PTO</li>
<li>Opportunity to play a critical role in building the foundations of the company and Engineering culture</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, API design, system architecture, cloud-based services, cloud infrastructure, Infrastructure as Code, database design, performance tuning, scaling high-throughput data systems, CI/CD pipelines, automated testing, modern DevOps practices, React, TypeScript, LLM-based systems, AI infrastructure, agentic AI workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Zuma</Employername>
      <Employerlogo>https://logos.yubhub.co/zuma.com.png</Employerlogo>
      <Employerdescription>Zuma is a technology company that provides a platform for property management businesses across the US and Canada, a ~$200B market.</Employerdescription>
      <Employerwebsite>https://www.zuma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/getzuma/da4d2130-954e-4b29-a9ef-3926b9bedba6</Applyto>
      <Location>US and Canada</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>30009529-7ed</externalid>
      <Title>Staff Engineer, Flight Control - X-BAT</Title>
      <Description><![CDATA[<p>In this role, you will contribute to the flight control system for X-BAT, with responsibility for delivering stable, predictable, and operationally robust performance across all phases of flight.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design and implement flight control laws and supporting logic across multiple flight regimes.</li>
<li>Build and mature high-fidelity 6DOF simulation environments used for control development and ensure accurate pre-flight performance predictions.</li>
<li>Execute verification activities, including linear analysis, Monte Carlo campaigns, disturbance/sensitivity/degraded operation testing, and HIL validation.</li>
<li>Diagnose and resolve simulation-to-test mismatches, including unidentified dynamics and modeling gaps.</li>
<li>Support flight test operations and contribute to rapid iteration between flights.</li>
<li>Work closely with propulsion, aero, actuation, sensors, state estimation, and autonomy teams to ensure accurate integrated system performance in the 6DOF simulation environment.</li>
<li>Mentor, teach, and upskill less experienced engineers.</li>
</ul>
<p>Required qualifications include 6+ years of experience in flight controls/GNC on real flight vehicles, with demonstrated contributions from design through test.</p>
<p>Preferred qualifications include autonomous systems or missile/UAV GNC background, familiarity with sensor and air data systems, CI/CD pipelines, and modern DevSecOps practices.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $280,000 a year</Salaryrange>
      <Skills>flight controls, GNC, vehicle modeling and simulation, control law design, control allocation, guidance and autonomy integration, verification and validation, C++, MATLAB/Simulink, Python, autonomous systems, sensor and air data systems, CI/CD pipelines, modern DevSecOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems for military and civilian use.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/86bbc19b-138b-4974-b27a-ad488230669e</Applyto>
      <Location>Dallas</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ceba9e5b-250</externalid>
      <Title>Senior Backend Engineer, Product and Infra</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Backend Engineer to build the systems and services that power our product experience. You&#39;ll own the backend infrastructure that makes our content discoverable, our features responsive, and our platform reliable at scale.</p>
<p>Your work will directly shape what users experience: designing APIs that serve rich content, building services that handle real-time interactions, implementing content-matching systems for rights and safety, and ensuring our platform performs under load. You&#39;ll architect systems that are fast, correct, and maintainable.</p>
<p>You&#39;ll collaborate closely with Product, ML Research, and Mobile/Web teams to ship features that matter. We use Python, Go, BigQuery, Pub/Sub, and a microservices architecture, but we care more about good judgment than specific tool experience.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and maintain application-level data models that organize rich content into canonical structures optimized for product features, search, and retrieval.</li>
<li>Build high-reliability ETLs and streaming pipelines to process usage events, analytics data, behavioral signals, and application logs.</li>
<li>Develop data services that expose unified content to the application, such as metadata access APIs, indexing workflows, and retrieval-ready representations.</li>
<li>Implement and refine fingerprinting pipelines used for deduplication, rights attribution, safety checks, and provenance validation.</li>
<li>Own data consistency between ingestion systems, application surfaces, metadata storage, and downstream reporting environments.</li>
<li>Define and track key operational metrics, including latency, completeness, accuracy, and event health.</li>
<li>Collaborate with Product teams to ensure content structures and APIs support evolving features and high-quality user experiences.</li>
<li>Partner with Analytics and Research teams to deliver clean usage datasets for experimentation, model evaluation, reporting, and internal insights.</li>
<li>Operate large analytical workloads in BigQuery and build reusable Dataflow/Beam components for structured processing.</li>
<li>Improve reliability and scale by designing robust schema evolution strategies, idempotent pipelines, and well-instrumented operational flows.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building production backend services and APIs at scale</li>
<li>Experience building ETL/ELT pipelines, event processing systems, and structured data models for applications or analytics</li>
<li>Strong background in data modeling, metadata systems, indexing, or building canonical representations for heterogeneous content</li>
<li>Proficiency in Python, Go, SQL, and scalable data-processing frameworks (Dataflow/Beam, Spark, or similar)</li>
<li>Familiarity with BigQuery or other analytical data warehouses and strong comfort optimizing large queries and schemas</li>
<li>Experience with event-driven architectures, Pub/Sub, or Kafka-like systems</li>
<li>Strong understanding of data quality, schema evolution, lineage, and operational reliability</li>
<li>Ability to design pipelines that balance cost, latency, correctness, and scale</li>
<li>Clear communication skills and an ability to collaborate closely with Product, Research, and Analytics stakeholders</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience building application-facing APIs or microservices that expose structured content</li>
<li>Background in information retrieval, indexing systems, or search infrastructure</li>
<li>Experience with fingerprinting, perceptual hashing, audio similarity metrics, or content-matching algorithms</li>
<li>Familiarity with ML workflows and how downstream analytics and usage data feed back into research pipelines</li>
<li>Understanding of batch + streaming architectures and how to blend them effectively</li>
<li>Experience with Go, Next.js, or React Native for occasional full-stack contributions</li>
</ul>
<p><strong>Why Join Us</strong></p>
<p>You will design the core data services and pipelines that power our product experience, analytics, and business operations. You’ll work on high-impact data challenges involving real-time signals, large-scale metadata systems, and cross-platform consistency. You’ll join a small, fast-moving team where you’ll shape the structure, reliability, and intelligence of our downstream data ecosystem.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Highly competitive salary and equity</li>
<li>Quarterly productivity budget</li>
<li>Flexible time off</li>
<li>Fantastic office location in Manhattan</li>
<li>Productivity package, including ChatGPT Plus, Claude Code, and Copilot</li>
<li>Top-notch private health, dental, and vision insurance for you and your dependents</li>
<li>401(k) plan options with employer matching</li>
<li>Concierge medical/primary care through One Medical and Rightway</li>
<li>Mental health support from Spring Health</li>
<li>Personalized life insurance, travel assistance, and many other perks</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $220,000</Salaryrange>
      <Skills>Python, Go, BigQuery, Pub/Sub, Data modeling, Metadata systems, Indexing, Canonical representations, ETL/ELT pipelines, Event processing systems, Structured data models, Scalable data-processing frameworks, Analytical data warehouses, Event-driven architectures, Kafka-like systems, Data quality, Schema evolution, Lineage, Operational reliability, Application-facing APIs, Microservices, Information retrieval, Indexing systems, Search infrastructure, Fingerprinting, Perceptual hashing, Audio similarity metrics, Content-matching algorithms, ML workflows, Batch + streaming architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Udio</Employername>
      <Employerlogo>https://logos.yubhub.co/udio.com.png</Employerlogo>
      <Employerdescription>Udio is a technology company building AI-powered music creation tools.</Employerdescription>
      <Employerwebsite>https://www.udio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/udio/jobs/4987729008</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>98022718-408</externalid>
      <Title>Engineer II, Flight Controls - X-BAT</Title>
      <Description><![CDATA[<p>You will contribute to the flight control system for X-BAT, with responsibility for delivering stable, predictable, and operationally robust performance across all phases of flight.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design and implement flight control laws and supporting logic across multiple flight regimes.</li>
<li>Build and mature high-fidelity 6DOF simulation environments used for control development and ensure accurate pre-flight performance predictions.</li>
<li>Execute verification activities, including linear analysis, Monte Carlo campaigns, disturbance/sensitivity/degraded operation testing, and HIL validation.</li>
<li>Diagnose and resolve simulation-to-test mismatches, including unidentified dynamics and modeling gaps.</li>
<li>Support flight test operations and contribute to rapid iteration between flights.</li>
<li>Work closely with propulsion, aero, actuation, sensors, state estimation, and autonomy teams to ensure accurate integrated system performance in the 6DOF simulation environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$130,000 - $190,000 a year</Salaryrange>
      <Skills>flight controls, GNC, vehicle modeling and simulation, control law design, control allocation, guidance and autonomy integration, verification and validation, C++, MATLAB/Simulink, Python, autonomous systems, missile/UAV GNC, sensor and air data systems, CI/CD pipelines, DevSecOps practices, HIL or integrated simulation/test environments, Linux-based development workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems for protecting service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/07546f7f-9a69-4adb-96d6-f94505e7523f</Applyto>
      <Location>Dallas</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>73094bc4-f8e</externalid>
      <Title>Senior Engineer, Flight Controls - X-BAT</Title>
      <Description><![CDATA[<p>In this role, you will contribute to the flight control system for X-BAT, with responsibility for delivering stable, predictable, and operationally robust performance across all phases of flight.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design and implement flight control laws and supporting logic across multiple flight regimes.</li>
<li>Build and mature high-fidelity 6DOF simulation environments used for control development and ensure accurate pre-flight performance predictions.</li>
<li>Execute verification activities, including linear analysis, Monte Carlo campaigns, disturbance/sensitivity/degraded operation testing, and HIL validation.</li>
<li>Diagnose and resolve simulation-to-test mismatches, including unidentified dynamics and modeling gaps.</li>
<li>Support flight test operations and contribute to rapid iteration between flights.</li>
<li>Work closely with propulsion, aero, actuation, sensors, state estimation, and autonomy teams to ensure accurate integrated system performance in the 6DOF simulation environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 - $240,000 a year</Salaryrange>
      <Skills>flight controls, GNC, vehicle modeling and simulation, control law design, control allocation, guidance and autonomy integration, verification and validation, C++, MATLAB/Simulink, Python, U.S. security clearance, autonomous systems, missile/UAV GNC, sensor and air data systems, CI/CD pipelines, DevSecOps practices, NPSS propulsion models, HIL or integrated simulation/test environments, Linux-based development workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems for military and civilian use.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/e764ace5-77fb-4f03-b21e-4e9149e356a6</Applyto>
      <Location>Dallas</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1591a9fa-143</externalid>
      <Title>Manager, Software Engineering - System Update Tools</Title>
      <Description><![CDATA[<p>The Solutions function at Shield AI is tasked with developing and deploying software applications that facilitate critical &amp; advanced operational capabilities across various systems and use cases. As a Solutions Software Engineering Manager, you will lead a team responsible for the architecture, design, and development of integrated software applications that deploy software updates and optimize in-house flight operations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading and supporting the team in designing and implementing reliable systems for delivering over-the-air (OTA) software updates to internal fleet &amp; customers,</li>
<li>Overseeing development of interactive software applications that simulate aircraft operations for training &amp; demo purposes,</li>
<li>Guiding the development of maintenance software and operator-facing applications that help staff track, schedule, and perform maintenance activities efficiently,</li>
<li>Supporting and reviewing enhancements to tools that collect, process, and analyze flight data, providing actionable insights that improve flight safety, efficiency, and compliance with regulatory standards,</li>
<li>Collaborating with cross-functional teams to ensure all software solutions integrate smoothly with existing systems, maintain system integrity and performance, and reduce code duplication.</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Software Engineering, or a related field, or equivalent practical experience,</li>
<li>6+ years of experience in software development, working on complex or distributed systems,</li>
<li>1+ years of experience leading projects or managing engineers,</li>
<li>Strong proficiency in Python and/or C++,</li>
<li>Experience designing and building software for deployment systems, data processing, or user-facing applications,</li>
<li>Experience collaborating with cross-functional teams such as DevOps, Integration &amp; Test, or similar.</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience with OTA software update systems or fleet-wide software deployment,</li>
<li>Experience developing simulation or training applications,</li>
<li>Experience building maintenance or operator-facing workflow tools,</li>
<li>Experience working with flight data or similar operational data systems,</li>
<li>Familiarity with integrating software across multiple systems and environments,</li>
<li>Experience working in high-reliability, safety-critical, or operational environments,</li>
<li>Experience working with or supporting testing, validation, or integration efforts.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 - $240,000 a year</Salaryrange>
      <Skills>Python, C++, Software development, Complex systems, Project management, Team leadership, DevOps, Integration &amp; Test, OTA software update systems, Fleet-wide software deployment, Simulation or training applications, Maintenance or operator-facing workflow tools, Flight data or similar operational data systems, Integrating software across multiple systems and environments, High-reliability, safety-critical, or operational environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, with a mission to protect service members and civilians with intelligent systems.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/bfcf87d0-60b0-4769-bc65-4a5544b43278</Applyto>
      <Location>Dallas</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>0549f86b-cdd</externalid>
      <Title>Principal Software Engineer, Backend Systems</Title>
      <Description><![CDATA[<p>Role</p>
<p>We&#39;re on a mission to redefine enterprise software, and we&#39;re looking for a Principal Software Engineer, Backend Systems to help push the boundaries of what&#39;s possible.</p>
<p>If you love designing and scaling complex backend systems, have shipped major projects or entire products, and can think fluently about distributed systems, data modeling, API design, and integrations, this role is for you.</p>
<p>You&#39;ll architect and scale our Go/Postgres/Redis/GraphQL backend, working alongside world-class engineers and product minds to drive high-impact projects, lead critical design discussions, and collaborate directly with customers to shape a platform that&#39;s transforming supply chain, manufacturing, and beyond.</p>
<p>The industry is rooting for us, and you&#39;ll play a pivotal role in making it happen.</p>
<p>This is a high-autonomy, high-impact individual contributor role, but if you&#39;re interested in growing into people management, the opportunity is there depending on your interest and performance.</p>
<p>Product</p>
<p>We&#39;re building the AI-first ERP to replace decades-old giants like SAP and Oracle, transforming how enterprises operate.</p>
<p>Our platform can generate any enterprise workflow application in minutes, a dramatic leap from the 1-2 years it traditionally takes IT teams to build one, giving process owners in supply chain, manufacturing, and operations the power to standardize, streamline, and drive their work to completion, no matter the complexity.</p>
<p>This breakthrough is powered by our workflow, forms, and AI engines, as well as our in-house Large Tabular Model, a first-of-its-kind innovation.</p>
<p>Customers aren&#39;t just adopting our platform; they&#39;re clamoring for more, rapidly expanding their use cases as we enter an exhilarating growth phase.</p>
<p>As one user put it: “I’ve been waiting for this for 20 years.”</p>
<p>Culture and Compensation</p>
<p>We are a customer-obsessed, product-driven company that is building a flexible, hybrid/remote culture to enable the brightest minds in the industry.</p>
<p>We are particularly interested in candidates based in our hubs of Seattle, San Francisco and New York, but we will consider candidates who live anywhere in the US, Canada, or Mexico.</p>
<p>We have industry-leading compensation packages, including equity and health benefits.</p>
<p>We are willing to sponsor US work authorization if needed.</p>
<p>Requirements</p>
<ul>
<li><p>M.S. in Computer Science or a related field (B.S. in Computer Science or a related field will be considered with substantial relevant experience)</p>
</li>
<li><p>5+ years of industry experience as a backend software engineer, with a focus on large-scale, user-facing web applications in companies like Slack, Uber, or similar</p>
</li>
<li><p>Proven experience in the architecture and design of large data systems, particularly for software as a service (SaaS)</p>
</li>
<li><p>Extensive experience in database systems development, data modeling, distributed systems and building robust application backends</p>
</li>
<li><p>Fluency with databases, APIs and modern backend technologies (experience with Go and GraphQL is strongly preferred, with the ability to quickly learn new technologies as needed)</p>
</li>
<li><p>A builder&#39;s spirit (you have a track record of building projects for fun, staying updated with open-source developments, etc.)</p>
</li>
<li><p>Ability to lead projects independently and collaboratively in a fast-paced startup environment</p>
</li>
<li><p>Excellent written and verbal communication skills</p>
</li>
<li><p>Strong enthusiasm for continuous learning and professional growth and for mentoring peers to help them grow as engineers</p>
</li>
</ul>
<p>Responsibilities</p>
<ul>
<li><p>Architect, design and develop scalable and robust backend systems for large data software-as-a-service (SaaS) applications, ensuring high performance and reliability.</p>
</li>
<li><p>Collaborate with cross-functional teams including Design, Product Management and industry experts to build high-quality product features.</p>
</li>
<li><p>Lead and mentor a team of engineers, providing guidance and expertise in backend development, database systems and distributed systems.</p>
</li>
<li><p>Stay abreast of emerging technologies and industry trends, incorporating new developments into the backend architecture and processes where appropriate.</p>
</li>
<li><p>Participate in code reviews, technical discussions and decision-making processes to maintain high standards of code quality and best practices.</p>
</li>
<li><p>Drive the adoption of best practices in backend development, data modeling and API design, ensuring the scalability and maintainability of the system.</p>
</li>
<li><p>Champion a culture of innovation, encouraging and leading initiatives to explore new technologies and improve existing systems.</p>
</li>
</ul>
<p>Additional Information</p>
<p>If this sounds exciting, please apply and we&#39;ll get back to you promptly if we see a fit!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>USD 175,000-200,000 per year</Salaryrange>
      <Skills>M.S. in Computer Science or a related field, 5+ years of industry experience as a backend software engineer, Proven experience in the architecture and design of large data systems, Extensive experience in database systems development, data modeling, distributed systems and building robust application backends, Fluency with databases, APIs and modern backend technologies, Go, GraphQL, Postgres, Redis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Regrello</Employername>
      <Employerlogo>https://logos.yubhub.co/regrello.com.png</Employerlogo>
      <Employerdescription>Regrello is a 60-person startup that is reimagining automation in supply chains and manufacturing.</Employerdescription>
      <Employerwebsite>https://regrello.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/regrello/3115193b-6b7b-4e03-bfcd-8bfab06e6e55</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>eadcc17e-a17</externalid>
      <Title>Engineering Manager, Data Systems</Title>
      <Description><![CDATA[<p>At KoBold Metals, we believe that making mineral exploration data broadly accessible to humans and machines will enable systematic exploration and materially improve exploration success rates. This role is key to that strategy. As an early Engineering Manager on the Data Systems Engineering team, you will help shape how we build our product and organization to realize this vision. By developing robust data systems, you will empower KoBold to unlock invaluable insights and streamline intricate scientific processes. Collaborating with our exceptional team of data scientists and geologists, you will tackle complex scientific problems head-on and pave the way for discoveries of vital energy-transition metals such as lithium, copper, nickel, and cobalt. Together we can shape the future of mineral exploration and contribute to building a sustainable world.</p>
<p>Responsibilities of this role include:</p>
<ul>
<li>Thoughtfully build and lead a team of high-performing software engineers, hiring and growing diverse individuals on your team</li>
<li>Leverage your team in collaboration with data scientists, geoscientists, and other software engineers to plan, prioritize, and deliver impactful projects that accelerate our exploration efforts</li>
<li>Work with the Head of Data Systems Engineering and interdisciplinary leaders across the company on our strategy to advance both our technical product and organization</li>
<li>Collaborate with cross-functional peers (geoscientists and data scientists) across the company, learning about mineral exploration, and coach your team to partner better across the company</li>
<li>Spend roughly one-third of your time contributing to projects as an individual contributor</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,000-$240,000</Salaryrange>
      <Skills>production software systems, engineering leadership, distributed cloud data systems, database and data storage technologies, large geospatial or geoscientific datasets</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>KoBold Metals</Employername>
      <Employerlogo>https://logos.yubhub.co/koboldmetals.com.png</Employerlogo>
      <Employerdescription>KoBold Metals is a privately held mineral exploration company and the largest independent exploration technology developer, with a portfolio of over 60 projects.</Employerdescription>
      <Employerwebsite>https://www.koboldmetals.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/koboldmetals/jobs/4678567005</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7619176a-424</externalid>
      <Title>Forward Deployed Engineer</Title>
      <Description><![CDATA[<p>You will spend the majority of your time embedded with Hebbia&#39;s most strategic customers, building the last mile of our platform for their specific workflows, data, and domain. This is a hands-on engineering role. You write production code, you ship it, you own it.</p>
<p>As a Forward Deployed Engineer, you are the bridge between Hebbia&#39;s platform and the real-world complexity of our customers&#39; environments. You sit with the customer&#39;s team, understand their hardest problems, and build solutions that make Hebbia indispensable. Then you bring what you&#39;ve learned back to our engineering and product teams to make the platform better for everyone.</p>
<p>This role is for engineers who want to combine deep technical work with direct customer impact. You will see your code create value in days, not months. The FDE team operates at the intersection of engineering and go-to-market. You will work closely with our core engineering team (shared code review, architecture alignment, deploy pipelines) and with our account teams, who direct where you deploy and what you focus on. Our team works in person 5 days a week at our offices in NYC and SF.</p>
<p>Responsibilities:</p>
<ul>
<li>Embed with strategic accounts to deeply understand their domain, data, and workflows</li>
<li>Build custom integrations, workflow automations, and domain-specific solutions on top of Hebbia&#39;s platform</li>
<li>Write production code that deploys through our CI/CD pipelines and meets our engineering standards</li>
<li>Own the technical relationship with the customer&#39;s team during your engagement</li>
<li>Prototype fast, validate with the customer, iterate, and ship</li>
<li>Return from engagements and work with engineering and product to generalize reusable patterns into platform capabilities</li>
<li>Participate in code review, on-call rotation, and architecture discussions alongside core engineering</li>
<li>Build connectors to customer data sources and document management systems</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>5+ years software development experience at a venture-backed startup or top technology firm</li>
<li>Strong full-stack engineering skills. You build across the stack: APIs, data pipelines, frontend when needed, infrastructure when needed.</li>
<li>Comfortable working in ambiguity. Customer problems are messy and underspecified. You figure it out.</li>
<li>High customer empathy. You enjoy sitting with users, understanding their workflows, and translating pain points into technical solutions.</li>
<li>Fast and pragmatic. You prototype, validate, and ship in days and weeks, not quarters.</li>
<li>Strong communicator. You are the primary technical point of contact for the customer. You can talk to both engineers and executives.</li>
<li>Experience with cloud platforms (e.g., AWS) and modern backend technologies (Python, TypeScript, Go)</li>
<li>Experience with data integrations, ETL pipelines, or enterprise data systems (S3, Snowflake, SharePoint, etc.) is a plus</li>
<li>Experience with LLMs, RAG systems, or applied AI is a plus but not required</li>
<li>Prior experience in finance, legal, or consulting domains is a plus</li>
<li>Experience with customer-facing engineering roles (solutions engineering, professional services, or similar) is a plus</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 to $300,000</Salaryrange>
      <Skills>Full-stack engineering, Cloud platforms (e.g., AWS), Modern backend technologies (Python, TypeScript, Go), Data integrations, ETL pipelines, or enterprise data systems (S3, Snowflake, SharePoint, etc.), Customer-facing engineering roles (solutions engineering, professional services, or similar), LLMs, RAG systems, or applied AI, Finance, legal, or consulting domains</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform that generates alpha and drives upside for investors and bankers. Founded in 2020, it powers investment decisions for large asset managers.</Employerdescription>
      <Employerwebsite>https://hebbia.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4679338005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>4528f6ba-6ed</externalid>
      <Title>Field Technology Director / CTO – Strategic Industries</Title>
      <Description><![CDATA[<p>Job Title: Field Technology Director / CTO – Strategic Industries</p>
<p>Join us on this thrilling journey to revolutionize the workforce with AI. The future of work is here, and it&#39;s at Cresta.</p>
<p>About the Role: The Field Technology Director/ CTO - Strategic Industries role at Cresta is a senior technical leadership position that serves as the bridge between our customers and field sales teams. It is both strategic and customer-facing, focused on aligning Cresta&#39;s technology strategy with customer needs and market trends. This role also provides critical feedback to Cresta&#39;s Engineering and Product organizations to inform roadmap and feature prioritization.</p>
<p>As a trusted advisor and strategic partner, the Field CTO helps customers understand, adopt, and maximize value from Cresta&#39;s technology solutions. The role blends deep technical expertise with strong communication and business acumen to drive technology adoption, influence product direction, and build long-term customer relationships.</p>
<p>Responsibilities:</p>
<p>Customer &amp; Field Engagement</p>
<ul>
<li>Act as the primary technical executive interface for strategic customers.</li>
<li>Translate business challenges into technical solutions using Cresta&#39;s products and services.</li>
<li>Partner with Sales, Customer Success, and Product teams to ensure technology alignment with customer needs.</li>
<li>Deliver executive-level presentations, workshops, and architecture sessions to customer stakeholders.</li>
</ul>
<p>Technology Evangelism</p>
<ul>
<li>Represent Cresta&#39;s technology vision at conferences, industry events, and customer meetings.</li>
<li>Demonstrate thought leadership on trends in AI, cloud computing, data analytics, and security.</li>
<li>Create and deliver compelling narratives that showcase how Cresta&#39;s technology drives innovation and business outcomes.</li>
</ul>
<p>Product &amp; Engineering Partnership</p>
<ul>
<li>Gather and synthesize customer feedback to inform product and engineering priorities.</li>
<li>Identify and address gaps in technology adoption or deployment through cross-functional collaboration.</li>
<li>Influence future product strategy with insights derived from real-world field engagements.</li>
</ul>
<p>Strategic Leadership</p>
<ul>
<li>Serve as a trusted advisor to CXO-level executives on digital transformation and innovation.</li>
<li>Collaborate with internal leadership to align go-to-market strategies with customer technology trends.</li>
<li>Mentor and support solution architects and customer engineers.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>10+ years in technology leadership roles (CTO, Architect, or equivalent).</li>
<li>Strong expertise in enterprise architecture, cloud platforms, AI/ML, and data systems.</li>
<li>Deep understanding of SaaS, APIs, and modern software development practices.</li>
<li>Proven ability to influence C-suite stakeholders and technical decision-makers.</li>
<li>Exceptional presentation and executive communication skills.</li>
<li>Deep understanding of how enterprises leverage Generative AI solutions.</li>
<li>Extensive experience with large-scale enterprise software implementations and complex use cases.</li>
<li>Strong analytical and problem-solving abilities.</li>
<li>Experience managing and advising on large-scale programs with multiple stakeholders and dependencies.</li>
</ul>
<p>Success Metrics:</p>
<ul>
<li>Revenue growth and deal win rates within key accounts.</li>
<li>Thought leadership visibility (talks, publications, events).</li>
<li>Contributions to product innovation and roadmap evolution.</li>
<li>Customer adoption and satisfaction with Cresta&#39;s technology solutions.</li>
</ul>
<p>Perks &amp; Benefits:</p>
<p>We offer a comprehensive and people-first benefits package to support you at work and in life:</p>
<ul>
<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family.</li>
<li>Flexible PTO to take the time you need, when you need it.</li>
<li>Paid parental leave for all new parents welcoming a new child.</li>
<li>Retirement savings plan to help you plan for the future.</li>
<li>Remote work setup budget to help you create a productive home office.</li>
<li>Monthly wellness and communication stipend to keep you connected and balanced.</li>
<li>In-office meal program and commuter benefits provided for onsite employees.</li>
</ul>
<p>Compensation at Cresta:</p>
<p>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table.</p>
<p>Compensation for this position includes a Base salary + Bonus + Equity.</p>
<p><strong>We are hiring for multiple levels for this role; title and compensation will depend upon experience.</strong></p>
<p>Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable. Your recruiter can provide further details. In addition, total compensation includes a comprehensive benefits package for you and your family.</p>
<p>We have noticed a rise in recruiting impersonations across the industry, where scammers attempt to access candidates&#39; personal and financial information through fake interviews and offers. All Cresta recruiting email communications will always come from the @cresta.ai domain. Any outreach claiming to be from Cresta via other sources should be ignored. If you are uncertain whether you have been contacted by an official Cresta employee, reach out to recruiting@cresta.ai</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>enterprise architecture, cloud platforms, AI/ML, data systems, SaaS, APIs, modern software development practices, Generative AI solutions, large-scale enterprise software implementations, complex use cases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a technology company that specializes in AI-powered contact center solutions.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5012699008</Applyto>
      <Location>United States, Remote</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1eef0b95-ff4</externalid>
      <Title>Senior Solutions Architect</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the workforce with AI. The Senior Solutions Architect role at Cresta is dynamic and integral, requiring deep understanding of Cresta&#39;s capabilities and their integration with enterprise systems.</p>
<p><strong>About the role:</strong> As part of a selective and growing team, Enterprise Architects also help define sales and execution processes for effective land-and-expand strategies.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Leverage domain expertise in Customer Experience (CX) and related ecosystems (e.g., Call Center infrastructure, Chat platforms, AWS services, Workforce Management, Automation, ChatBots, CRM, Integrations).</li>
<li>Serve as the authoritative source on Cresta&#39;s platform integrations and their technical benefits.</li>
<li>Own the Technical Architecture blueprint from pre-sales to post-sales, ensuring technical wins by managing stakeholder expectations and collaborating across multiple levels (from end-users to executives) and with internal teams (Pre-sales Solution Engineers, engineering, and success organizations).</li>
<li>Pre-sales: problem-solve with end-customers to determine the best solution architecture and integration approach.</li>
<li>Post-sales: support the technical implementation team to help get customers up and running on Cresta.</li>
<li>Along the way, collaborate closely with Sales, Product, Marketing, and Engineering to meet existing, new, and future customer needs.</li>
</ul>
<p><strong>Qualifications We Value:</strong></p>
<ul>
<li>Experience with Telephony and Contact Center Infrastructure (AWS Connect, Genesys, Five9, etc.).</li>
<li>Good understanding of integration options: APIs, iPaaS, UI Integrations.</li>
<li>Highly experienced with AWS, GCP, and Salesforce.</li>
<li>Strong expertise in enterprise architecture, cloud platforms, AI/ML, and data systems.</li>
<li>Extensive experience in large-scale enterprise software implementations and solutions architecture.</li>
<li>Deep understanding of how enterprises leverage Generative AI solutions.</li>
<li>Highly organized: able to manage complex internal and external processes with many different stakeholders and timelines, keeping all parties in the loop with clear notes, action items, and next steps so projects stay on track and successful.</li>
<li>Able to build strong relationships with external technical stakeholders and to take a consultative, strategic approach to solving customer problems.</li>
<li>Experience managing and advising on large-scale programs with multiple stakeholders and dependencies.</li>
<li>Willing to travel occasionally, and to join frequent video calls with customers across EST to PST time zones.</li>
</ul>
<p><strong>Product &amp; Engineering Partnership:</strong></p>
<ul>
<li>Gather and synthesize customer feedback to inform product and engineering priorities.</li>
<li>Identify and address gaps in technology adoption or deployment through cross-functional collaboration.</li>
<li>Influence future product strategy with insights derived from real-world field engagements.</li>
<li>Report directly to the Field CTO and manage delivery for strategic accounts end-to-end.</li>
</ul>
<p><strong>Perks &amp; Benefits:</strong> We offer a comprehensive and people-first benefits package to support you at work and in life:</p>
<ul>
<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family.</li>
<li>Paid parental leave for all new parents welcoming a new child.</li>
<li>Remote work setup budget to help you create a productive home office.</li>
<li>Monthly wellness and communication stipend to keep you connected and balanced.</li>
<li>20 days of vacation time to promote a healthy work-life blend.</li>
</ul>
<p><strong>Compensation at Cresta:</strong> Cresta’s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table. Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable. Your recruiter can provide further details. In addition, total compensation includes a comprehensive benefits package for you and your family.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Telephony and Contact Center Infrastructure, AWS Connect, Genesys, Five9, Integration options, APIs, iPaaS, UI Integrations, AWS, GCP, Salesforce, Enterprise architecture, Cloud platforms, AI/ML, Data systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that provides a platform for contact centers to unlock customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4964429008</Applyto>
      <Location>United States (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>be821069-a7f</externalid>
      <Title>Asset Data Engineer</Title>
      <Description><![CDATA[<p>Join the Asset Data team and build the streaming data infrastructure that powers Anchorage&#39;s digital asset platform. You&#39;ll design systems that ingest real-time blockchain and market data from diverse providers, transforming raw feeds into certified, trusted data products.</p>
<p>We&#39;re creating contract-governed supply chains that let us onboard new assets and providers quickly while maintaining the low-latency, high-availability SLOs our business depends on.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build streaming data pipelines for blockchain data (onchain transactions, staking rewards, validator info) and market data (prices, trades, order books)</li>
<li>Design and implement data contracts and validation gates that enforce quality and schema compliance at ingestion points</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Collaborate on designing the architecture for standardized ingestion patterns that enable rapid onboarding of new blockchains and market data feeds</li>
<li>Establish redundancy and failover patterns to meet Tier 1 availability and freshness SLOs for critical data products</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Collaborate with Protocols, Trading, and Custody teams to understand their data needs and design certified data products with clear SLAs</li>
<li>Partner with Data Platform team on orchestration, storage patterns (BigLake), and metadata management (Atlan)</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Advocate for contract-governed data supply chains and help establish engineering standards for producer patterns across the org</li>
<li>Contribute to architectural decisions and help mature the team&#39;s practices around observability, testing, and operational excellence</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>5-7+ years building streaming or high-throughput data systems: You have experience designing and operating production data pipelines that handle large volumes with low latency and high reliability</li>
<li>Solid backend engineering skills: You&#39;re proficient in Go or Python and have built services that interact with streaming infrastructure (Kafka, pub/sub, websockets, REST APIs)</li>
<li>Blockchain data familiarity: You understand blockchain concepts and are comfortable working with on-chain data (transactions, events, staking, validators) across multiple chains with different data models</li>
<li>Data engineering adjacent skills: You&#39;re comfortable with data transformation patterns, schema evolution, and working with cloud data warehouses (BigQuery) and storage systems (GCS, BigLake)</li>
<li>Operational mindset: You have experience deploying and operating services on cloud platforms (preferably GCP), with strong practices around monitoring, alerting, and incident response</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Staking data expertise: You&#39;ve worked with staking rewards, validator data, or proof-of-stake blockchain infrastructure</li>
<li>Market data systems: You&#39;ve built systems that ingest and process market data (prices, trades, order books) from exchanges or data vendors</li>
<li>Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, Kafka, pub/sub, websockets, REST APIs, blockchain data, data transformation patterns, schema evolution, cloud data warehouses, storage systems, staking data expertise, market data systems, infrastructure as code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/82139746-fb0e-44b9-bbb6-ae078e5d251a</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8da705c0-ccb</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>Are you passionate about building infrastructure that powers billions of ad impressions daily? Join us to shape the backbone of a rapidly growing ad platform—where scale, reliability, and data-driven innovation are at the heart of everything we do.</p>
<p>As a Principal Software Engineer on the Bing Ads team, you will be responsible for designing and developing near real-time services, preparing data stores, and integrating them with other ad-serving components. Collaboration between and across teams is an essential part of this role, as you will engage with partners to meet mutual objectives.</p>
<p>This role will enable you to gain insights into the Bing ad serving platform, collaborate closely with data scientists, and develop expertise in working with individuals responsible for different components of the ad infrastructure. You will have the opportunity to grow your skills, learn from industry experts, and continuously expand your knowledge in a dynamic and innovative environment.</p>
<p>This role allows flexible working hours with partial work from home.</p>
<p>Responsibilities:</p>
<ul>
<li>Independently implement high-performance solutions across teams while maintaining a quality checklist.</li>
<li>Create and monitor telemetry data and influence analytics to better identify patterns that reveal errors and unexpected problems.</li>
<li>Lead by example and mentor others to produce extensible and maintainable code used across products.</li>
<li>Spearhead efforts to optimize, debug, refactor, and reuse code to improve performance, maintainability, effectiveness, and return on investment (ROI).</li>
<li>Oversee the design and development of products, identifying other teams and technologies that will be leveraged, how they will interact, and when your system may provide support to others.</li>
<li>Lead efforts to determine back-end dependencies associated with the product, ensuring appropriate security and performance, driving reliability in the solutions, and optimizing dependency chains for the solution.</li>
<li>Respond to incidents and complex issues by identifying and troubleshooting the issue, deploying the appropriate fixes, and implementing automations to prevent recurring issues.</li>
<li>Follow prescriptive guidance for security, privacy, and compliance standards.</li>
<li>Collaborate within and across teams by proactively and systematically sharing information.</li>
<li>Resolve conflicts across teams and engage with partners to meet mutual objectives.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C#, Java, C, C++, Python or JavaScript OR equivalent experience.</li>
<li>4+ years technical experience working with large-scale cloud or distributed data systems.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C#, Java, C, C++, Python or JavaScript OR Bachelor’s Degree in Computer Science or related technical field AND 10+ years technical engineering experience with coding in languages including, but not limited to, C#, Java, C, C++, Python or JavaScript OR equivalent experience.</li>
<li>8+ years technical experience in software development, service engineering, or systems engineering.</li>
<li>3+ years experience in data science, data modeling, or data engineering.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>C#, Java, C, C++, Python, JavaScript, large-scale cloud or distributed data systems, data science, data modeling, data engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-10/</Applyto>
      <Location>Multiple Locations, United States</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>b220ac50-0a0</externalid>
      <Title>Technical Abuse Investigator</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>San Francisco; New York City; Remote - US</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Intelligence &amp; Investigations</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$198K – $220K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe achieving this goal requires real-world deployment and continuous iteration based on how our products are used—and misused—in practice.</p>
<p>The Intelligence and Investigations team supports this mission by detecting, investigating, and disrupting the misuse of our products, particularly critical or novel harms. Our work enables partner teams to develop data-backed model policies and build scalable safety mitigations. By precisely understanding abuse, we help ensure OpenAI’s products can be used safely to build meaningful, rewarding applications.</p>
<p><strong>About the Role</strong></p>
<p>As a Technical Abuse Investigator on the Intelligence and Investigations team, you will be responsible for detecting, investigating, and disrupting malicious use of OpenAI’s platform. You will further scale parts of the investigative process to help our team disrupt harm at scale. This role combines traditional investigative judgment with strong technical fluency: much of the work involves navigating complex datasets to surface actionable abuse signals, not just reviewing individual reports.</p>
<p>In addition to conducting investigations directly, this role is explicitly designed to act as a force multiplier for the broader investigations team. You will be scaling or automating highly manual, important and nuanced processes. You will design and implement lightweight technical solutions—such as notebook templates, data pipelines or internal utilities—that enable specialized investigators to identify, track, and action abuse at a greater scale than a single investigator can currently achieve. Success in this role is measured not only by investigations completed, but by how effectively your work enables you and your team members to operate more efficiently and consistently.</p>
<p>You will work closely with engineering, legal, investigations, security, and policy partners to respond to time-sensitive escalations, investigate activity that falls outside existing safeguards, and translate investigative insights into scalable detection and enforcement strategies.</p>
<p>This role includes participation in an on-call rotation to handle urgent escalations outside of normal work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise disturbing material. This role works <strong>PST</strong> hours and is open to remote work within the United States, though we strongly prefer candidates based in San Francisco or New York.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Detect, investigate, and disrupt abuse and harm across complex datasets, partnering with policy, legal, global affairs, security, and engineering teams.</li>
<li>Develop and iterate on abuse signals and investigative methods, scaling one-off insights to reduce manual effort and expand coverage.</li>
<li>Build and maintain lightweight technical solutions (e.g., SQL/Python data pipelines, investigation templates, dashboards, or internal utilities) for investigators focused on specific harm domains.</li>
<li>Develop a deep understanding of OpenAI’s products, data systems, and enforcement mechanisms, and collaborate with engineering and data teams to improve investigative tooling, data quality, and workflows.</li>
<li>Communicate investigation findings effectively to internal stakeholders through written briefs, data-backed recommendations, and escalation summaries.</li>
<li>Rotate (infrequently) into an incident response role that requires rapid threat triaging, investigation, mitigation, sound judgment, and concise briefing to senior leadership.</li>
<li>Be someone people enjoy working with.</li>
<li>Demonstrate a proven ability to quickly learn new processes, systems, and team dynamics while thriving in ambiguous, rapidly changing, and high-pressure environments.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have a strong background in computer science, software engineering, or a related field.</li>
<li>Have experience with data analysis, machine learning, or other technical skills relevant to the role.</li>
<li>Can work effectively in a fast-paced, dynamic environment.</li>
<li>Can communicate complex technical information to non-technical stakeholders.</li>
<li>Can work independently and as part of a team.</li>
<li>Can adapt to changing priorities and deadlines.</li>
<li>Can maintain confidentiality and handle sensitive information.</li>
<li>Can work effectively in a remote environment.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$198K – $220K</Salaryrange>
      <Skills>computer science, software engineering, data analysis, machine learning, technical skills, investigative judgment, complex datasets, abuse signals, investigative methods, lightweight technical solutions, SQL, Python, data pipelines, investigation templates, dashboards, internal utilities, data systems, enforcement mechanisms, engineering, data teams, investigative tooling, data quality, workflows, written briefs, data-backed recommendations, escalation summaries, incident response, rapid threat triaging, investigation, mitigation, sound judgement, concise briefing, senior leadership, team dynamics, high-pressure environments, artificial intelligence, natural language processing, computer vision, deep learning, neural networks, data science, statistics, probability, mathematics, algorithm design, software development, testing, debugging, version control, agile development, scrum, kanban, project management, team leadership, communication, public speaking, writing, editing, proofreading</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing and deploying artificial intelligence in a way that benefits all of humanity. It is a privately held company with a large team of engineers, researchers, and other professionals.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/492ffc24-6b6e-4aa0-b31c-2a29a550b086</Applyto>
      <Location>San Francisco; New York City; Remote - US</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>5534ca54-685</externalid>
      <Title>Data Scientist, User Operations</Title>
      <Description><![CDATA[<p><strong>Data Scientist, User Operations</strong></p>
<p><strong>Location</strong></p>
<p>New York City; San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Data Science</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>OpenAI’s User Operations organization is building the data and intelligence layer behind AI-assisted operations — the systems that decide when automation should help users, when humans should step in, and how both improve over time. Our flagship platform is transforming customer support into a model for “agent-first” operations across OpenAI.</p>
<p><strong>About the Role</strong></p>
<p>As a Data Scientist on User Operations, you’ll design the models, metrics, and experimentation frameworks that power OpenAI’s human-AI collaboration loop. You’ll build systems that measure quality, optimize automation, and turn operational data into insights that improve product and user experience at scale. You’ll partner closely with Support Automation Engineering, Product, and Data Engineering to ensure our data systems are production-grade, trusted, and impactful.</p>
<p>This role is based in San Francisco or New York City. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>Why it matters</strong></p>
<p>Every conversation users have with OpenAI products produces signals about how humans and AI interact. User Ops Data Science turns those signals into insights that shape how we support users today and design agentic systems for tomorrow. This is a unique opportunity to help define how AI collaboration at scale is measured and improved inside OpenAI.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build and own metrics, classifiers, and data pipelines that determine automation eligibility, effectiveness, and guardrails.</li>
<li>Design and evaluate experiments that quantify the impact of automation and AI systems on user outcomes like resolution quality and satisfaction.</li>
<li>Develop predictive and statistical models that improve how OpenAI’s support systems automate, measure, and learn from user interactions.</li>
<li>Partner with engineering and product teams to create feedback loops that continuously improve our AI agents and knowledge systems.</li>
<li>Translate complex data into clear, actionable insights for leadership and cross-functional stakeholders.</li>
<li>Develop and socialize dashboards, applications, and other ways of enabling the team and company to answer product data questions in a self-serve way.</li>
<li>Contribute to establishing data science standards and best practices in an AI-native operations environment.</li>
<li>Partner with other data scientists across the company to share knowledge and continually synthesize learnings across the organization.</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>10+ years of experience in data science roles within product or technology organizations.</li>
<li>Expertise in statistics and causal inference, applied in both experimentation and observational studies.</li>
<li>Expert-level SQL and proficiency in Python for analytics, modeling, and experimentation.</li>
<li>Proven experience designing and interpreting experiments and making statistically sound recommendations.</li>
<li>Experience building data systems or pipelines that power production workflows or ML-based decisioning.</li>
<li>Experience developing and extracting insights from business intelligence tools such as Mode, Tableau, and Looker.</li>
<li>A strategic, impact-driven mindset, capable of translating complex business problems into actionable frameworks.</li>
<li>Ability to build relationships with diverse stakeholders and cultivate strong partnerships.</li>
<li>Strong communication skills, including the ability to bridge technical and non-technical stakeholders, translate complex data into clear narratives, and collaborate across functions to ensure business impact.</li>
<li>Ability to operate effectively in a fast-moving, ambiguous environment with limited structure.</li>
</ul>
<p><strong>Nice-to-haves:</strong></p>
<ul>
<li>Familiarity with large language models or AI-assisted operations</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>statistics, causal inference, SQL, Python, data systems, pipelines, production workflows, ML-based decisioning, business intelligence tools, Mode, Tableau, Looker, large language models, AI-assisted operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing and applying artificial intelligence in various fields. The company is headquartered in San Francisco and has a team of experienced engineers and researchers.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/dd418f6d-7212-491c-944c-aeac9dc066ec</Applyto>
      <Location>New York City; San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>ec06a431-7fa</externalid>
      <Title>Software Engineer - Privacy &amp; Compliance</Title>
      <Description><![CDATA[<p><strong>Software Engineer - Privacy &amp; Compliance</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco; Seattle</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p>We’re looking for a <strong>Software Engineer</strong> to architect and build backend systems that enforce data privacy and automate compliance at scale. You’ll work closely with product, infrastructure, security, and legal teams to embed privacy-by-design into our data and access layers.</p>
<p>This is a hands-on, high-impact role for an experienced engineer who is passionate about protecting user data while enabling innovation.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Design, build, and operate backend services that enforce policy-driven data access, lifecycle controls, and privacy protections.</li>
<li>Develop distributed authorization and identity-aware enforcement mechanisms integrated directly into data services and control planes.</li>
<li>Implement auditability, policy hooks, and enforcement observability to ensure compliance is continuously verifiable.</li>
<li>Partner with Security, Legal, and Compliance to convert privacy requirements into scalable technical designs and developer-friendly APIs.</li>
<li>Harden data platforms and backend services through schema-level controls and data handling constraints by default.</li>
<li>Collaborate with infrastructure teams to ensure consistent enforcement across systems while minimizing duplicated implementations.</li>
<li>Contribute patterns, libraries, and education that elevate trustworthy data access patterns across the organization.</li>
</ul>
<p><strong>You Might Thrive in This Role If You Have</strong></p>
<ul>
<li><strong>5+ years of industry experience</strong> building and operating backend or infrastructure systems in production.</li>
<li><strong>Strong software engineering fundamentals</strong>, with fluency in at least one major programming language (e.g., Python, Go, Rust, C++, Java).</li>
<li>Experience with distributed authorization, RBAC/ACL systems, encryption-based access, or policy engines.</li>
<li><strong>Familiarity with global privacy regulations</strong> and their architectural implications.</li>
<li><strong>Ability to influence and collaborate</strong> with teams across legal, compliance, product, and engineering.</li>
<li>A <strong>bias toward practical, impactful solutions</strong> that balance privacy protections with product needs.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with cloud platforms (e.g., Azure, AWS, GCP) and large-scale data systems.</li>
<li>Background in security engineering, privacy engineering, or data governance.</li>
<li>Experience with control-plane or metadata-driven enforcement systems.</li>
<li>Exposure to data platforms or ML infrastructure.</li>
<li>Prior experience in a regulated or highly sensitive data environment.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>Python, Go, Rust, C++, Java, Distributed authorization, RBAC/ACL systems, Encryption-based access, Policy engines, Global privacy regulations, Cloud platforms, Large-scale data systems, Security engineering, Privacy engineering, Data governance, Control-plane or metadata-driven enforcement systems, Data platforms, ML infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/23b158fe-709e-4bf5-856c-d10953d32f60</Applyto>
      <Location>San Francisco; Seattle</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>56bb0069-e56</externalid>
      <Title>Software Engineer, Scaled Abuse</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Scaled Abuse</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the team</strong></p>
<p>The Applied team safely brings OpenAI&#39;s technology to the world. We released ChatGPT; Plugins; DALL·E; and the APIs for GPT-5, embeddings, and fine-tuning. We also operate inference infrastructure at scale. There&#39;s a lot more on the immediate horizon.</p>
<p>Our customers build fast-growing businesses around our APIs, which power product features that were never before possible. ChatGPT is a prime example of what is currently possible. We simultaneously ensure that our powerful tools are used responsibly. Safe deployment is more important to us than unfettered growth.</p>
<p><strong>About the role</strong></p>
<p>The Scaled Abuse team protects OpenAI’s products and customers by detecting, preventing, and responding to fraudulent and abusive behavior at scale. We build and operate the backend and data systems that power real-time detection, investigation workflows, and enforcement — balancing strong protections with a great user experience as the platform grows.</p>
<p>Our work sits at the intersection of engineering and abuse expertise: we partner closely with Trust &amp; Safety, Security, and Product to understand emerging attack patterns, translate messy signals into clear system behavior, and continuously harden our defenses. The problems are dynamic and ambiguous by default, so we value engineers who can quickly dive into an unfamiliar codebase, develop strong intuition about how it works end-to-end, and propose pragmatic improvements that make the entire stack more resilient.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and build systems for fraud detection and remediation while balancing fraud loss, cost of implementation, and customer experience.</li>
<li>Work closely with finance, security, product, research, and trust &amp; safety operations to holistically combat fraudulent and abusive actors on our systems.</li>
<li>Stay abreast of the latest techniques and tools to remain several steps ahead of determined, well-resourced adversaries.</li>
<li>Utilize GPT-5 and future models to combat fraud and abuse more effectively.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have at least 5 years of software engineering experience in backend and data systems.</li>
<li>Have at least 2 years of experience in fraud or abuse analysis, investigation, and/or operations.</li>
<li>Can dive into our codebase, quickly intuit how it works end-to-end, and suggest improvements that put us in a stronger engineering position.</li>
<li>Have a voracious, intrinsic desire to learn and fill in missing skills, and an equally strong talent for sharing that knowledge clearly and concisely with others.</li>
<li>Are comfortable with ambiguity and rapidly changing conditions, and view change as an opportunity to add structure and order where necessary.</li>
<li>Have experience with machine learning techniques (a plus, but not required).</li>
</ul>
<p><strong>Our tech stack</strong></p>
<p>Our infrastructure is built on Terraform, Kubernetes, Azure, Python, Postgres, and Kafka. While we value experience with these technologies, we are primarily looking for engineers with strong technical skills and the ability to quickly pick up new tools and frameworks.</p>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>software engineering, backend and data systems, fraud or abuse analysis, investigation and/or operations, GPT-5 and future models, Machine Learning techniques, Terraform, Kubernetes, Azure, Python, Postgres, Kafka</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. They push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through their products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/3c67f712-697d-48d8-b05c-01be896e61da</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>e0a79286-3ae</externalid>
      <Title>Software Engineer, Online Storage</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Online Storage</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick and safe time (1 hour per 30 hours worked)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p><strong>About the Team</strong></p>
<p>We are the Online Storage team powering ChatGPT, Sora, and the OpenAI APIs. We’re a growing team set up to own the databases and online‑storage infrastructure that serve all our products.</p>
<p><strong>About the Role</strong></p>
<p>As OpenAI scales, we’re seeking experienced, problem‑solving engineers to build robust, high‑performance, and scalable database systems. Our ability to rapidly iterate on products while ensuring reliability and speed is key to our success.</p>
<p>You’ll work in a fast‑paced, collaborative environment, building systems that serve hundreds of millions of users globally, with a strong emphasis on safety, reliability, and performance.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and build highly scalable, reliable, and performant database systems</li>
<li>Design and build simple, intuitive APIs for the underlying database</li>
<li>Analyze and resolve performance and scalability bottlenecks to improve overall system efficiency</li>
<li>Debug, instrument, and fix system issues — from pinpointing root causes to delivering long-term solutions</li>
<li>Define technical strategy and guide the development of robust infrastructure that supports high-scale production systems and evolving business needs</li>
<li>Collaborate closely with product teams to deeply understand requirements and deliver impactful solutions</li>
<li>Boost engineering productivity by building intuitive tools and systems that empower fellow developers</li>
<li>Own the reliability of the systems you build, including participating in an on-call rotation to address critical incidents</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience building (and rebuilding) production systems to support new product capabilities and growing scale</li>
<li>Care deeply about the end-user experience and take pride in solving real customer needs</li>
<li>Embrace a humble, collaborative mindset and go the extra mile to support your teammates and the broader mission</li>
<li>Own problems end-to-end — you’re comfortable learning on the fly to fill gaps and get things done</li>
<li>Build internal tools that improve workflows when off-the-shelf solutions fall short</li>
<li>Have hands-on experience with distributed systems such as data storage, caching, search, or other backend infrastructure components</li>
<li>Prioritize the reliability, scalability, and performance of large-scale systems</li>
<li>Thrive in ambiguous, fast-paced environments and enjoy iterating rapidly on product and research initiatives</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>4+ years of industry experience, including 2+ years leading large-scale, complex projects or technical initiatives as an engineer or tech lead</li>
<li>Strong passion for building distributed systems at scale, with a focus on reliability, scalability, security, and continuous improvement</li>
<li>Expertise in systems programming, with hands-on experience in multi-threading and concurrency; proficiency in C++ and/or Python is highly preferred</li>
<li>Domain experience in areas such as databases, large-scale data systems, storage, caching, search, or other core components of distributed infrastructure is preferred</li>
<li>Excellent communication skills, with the ability to build consensus across diverse technical and non-technical stakeholders</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>C++, Python, Systems programming, Multi-threading, Concurrency, Distributed systems, Data storage, Caching, Search, Backend infrastructure components, Databases, Large-scale data systems, Core components of distributed infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/bd1c8680-acf0-45d7-ad66-f301ea72c10c</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>c57e505a-535</externalid>
      <Title>Software Engineer, Online Storage</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Online Storage</strong></p>
<p><strong>Location</strong></p>
<p>Seattle</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonuses for eligible employees, and benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick and safe time (1 hour per 30 hours worked)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p><strong>About the Team</strong></p>
<p>We are the Online Storage team powering ChatGPT, Sora, and the OpenAI APIs. We’re a growing team set up to own the databases and online‑storage infrastructure that serve all our products.</p>
<p><strong>About the Role</strong></p>
<p>As OpenAI scales, we’re seeking experienced, problem‑solving engineers to build robust, high‑performance, and scalable database systems. Our ability to rapidly iterate on products while ensuring reliability and speed is key to our success.</p>
<p>You’ll work in a fast‑paced, collaborative environment, building systems that serve hundreds of millions of users globally, with a strong emphasis on safety, reliability, and performance.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and build highly scalable, reliable, and performant database systems</li>
<li>Design and build simple, intuitive APIs for the underlying databases</li>
<li>Analyze and resolve performance and scalability bottlenecks to improve overall system efficiency</li>
<li>Debug, instrument, and fix system issues — from pinpointing root causes to delivering long-term solutions</li>
<li>Define technical strategy and guide the development of robust infrastructure that supports high-scale production systems and evolving business needs</li>
<li>Collaborate closely with product teams to deeply understand requirements and deliver impactful solutions</li>
<li>Boost engineering productivity by building intuitive tools and systems that empower fellow developers</li>
<li>Own the reliability of the systems you build, including participating in an on-call rotation to address critical incidents</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience building (and rebuilding) production systems to support new product capabilities and growing scale</li>
<li>Care deeply about the end-user experience and take pride in solving real customer needs</li>
<li>Embrace a humble, collaborative mindset and go the extra mile to support your teammates and the broader mission</li>
<li>Own problems end-to-end — you’re comfortable learning on the fly to fill gaps and get things done</li>
<li>Build internal tools that improve workflows when off-the-shelf solutions fall short</li>
<li>Have hands-on experience with distributed systems such as data storage, caching, search, or other backend infrastructure components</li>
<li>Prioritize the reliability, scalability, and performance of large-scale systems</li>
<li>Thrive in ambiguous, fast-paced environments and enjoy iterating rapidly on product and research initiatives</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>4+ years of industry experience, including 2+ years leading large-scale, complex projects or technical initiatives as an engineer or tech lead</li>
<li>Strong passion for building distributed systems at scale, with a focus on reliability, scalability, security, and continuous improvement</li>
<li>Expertise in systems programming, with hands-on experience in multi-threading and concurrency; proficiency in C++ and/or Python is highly preferred</li>
<li>Preferably, domain experience in areas such as databases, large-scale data systems, storage, caching, search, or other core components of distributed infrastructure</li>
<li>Excellent communication skills, with the ability to build consensus across diverse technical and non-technical stakeholders</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>systems programming, multi-threading and concurrency, C++ and/or Python, distributed systems, data storage, caching, search, backend infrastructure components, databases, large-scale data systems, scalability, security, continuous improvement</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/460b4295-3803-4dda-983d-3b0fea0b0fc4</Applyto>
      <Location>Seattle</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>700b3e35-b5b</externalid>
      <Title>Software Engineer, Integrity Foundations</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Integrity Foundations - London</strong></p>
<p><strong>Location</strong></p>
<p>London, UK</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$221K – $370K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the team</strong></p>
<p>The Applied Foundations team at OpenAI is dedicated to ensuring that our cutting-edge technology is not only revolutionary, but also secure from a myriad of adversarial threats. We strive to maintain the integrity of our platforms as they scale. Our team is on the front lines of defending against financial abuse, scaled attacks, and other forms of misuse that could undermine the user experience or harm our operational stability.</p>
<p>The Integrity pillar within Applied Foundations is responsible for the scaled systems that help identify and respond to bad actors and harm on OpenAI’s platforms. We are creating a 0→1 team in London to architect next-generation systems that will support, leverage, and scale the work of experts in harms committed with our technology.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and build scaled foundational systems used in the detection, tracking, and enforcement of harm using our technologies.</li>
<li>Work closely with and learn from technical and non-technical experts on harms we are observing, and that stem from our platforms, to inform your designs and implementation.</li>
<li>Leverage OpenAI’s most advanced technologies to automate and augment our abilities to detect and reason about complex harms quickly, accurately, and with minimum human intervention.</li>
<li>Collaborate with policy, trust and safety operations, legal, investigations, and harm-specialised engineers and data scientists to holistically combat abusive actors and customers using OpenAI’s technology.</li>
<li>Stay abreast of the latest techniques and tools to stay several steps ahead of determined and well-resourced adversaries.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have at least 5 years of software engineering experience in backend and data systems.</li>
<li>Have at least 2 years’ experience in trust and safety analysis, investigation, and/or operations.</li>
<li>Are excited to learn from top experts in the world on harms committed with AI, and to collaborate in an interdisciplinary team including technical and non-technical roles to combat these harms.</li>
<li>Can dive into our codebase, intuit how it works, and offer suggestions that lead us to a stronger engineering position.</li>
<li>Have a voracious and intrinsic desire to learn and fill in missing skills, and an equally strong talent for sharing that knowledge clearly and concisely with others.</li>
<li>Are comfortable with ambiguity and rapidly changing conditions. You view changes as an opportunity to add structure and order when necessary.</li>
<li>Have experience with Machine Learning techniques (a plus, but not required).</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$221K – $370K • Offers Equity</Salaryrange>
      <Skills>software engineering, backend and data systems, trust and safety analysis, investigation, operations, Machine Learning techniques, AI research, deployment, data science, engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/46703db4-6023-4ac6-93a8-22dc95009945</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>9645d5ee-445</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft are looking for a talented Principal Software Engineer at their Redmond office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer, you will be responsible for architecting scalable, low-latency systems for ingesting, processing, and serving personalized signals. You will design data models and APIs that enable Copilot to reason about user context, preferences, and history. You will build real-time and batch personalization engines that adapt Copilot&#39;s behavior. You will collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled. You will optimize for performance, reliability, and cost across diverse workloads and geographies. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems for ingesting, processing, and serving personalized signals.</li>
<li>Design data models and APIs that enable Copilot to reason about user context, preferences, and history.</li>
<li>Build real-time and batch personalization engines that adapt Copilot&#39;s behavior.</li>
<li>Collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with large scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
<li>Experience using Machine Learning frameworks, including experience using, deploying, and scaling language learning models, either personally or professionally.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Comprehensive benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, large scale data systems, AI platforms, Machine Learning frameworks, experience with cloud infrastructure, experience with cybersecurity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices. The company&apos;s mission is to empower every person and every organization on the planet to achieve more. Microsoft is a leader in the technology industry, with a strong presence in the fields of cloud computing, artificial intelligence, and cybersecurity.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>68e759f6-179</externalid>
      <Title>Principal Software Engineer - Data, Personalization</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Principal Software Engineer to lead the design and development of distributed data infrastructure, APIs and personalization pipelines that drive Copilot&#39;s intelligence.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer, you will work across Microsoft AI and Copilot teams to build scalable, low-latency systems for ingesting, processing, and serving personalized signals. You will design data models and APIs that enable Copilot to reason about user context, preferences, and history. You will build real-time and batch personalization engines that adapt Copilot&#39;s behavior. You will collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled. You will optimize for performance, reliability, and cost across diverse workloads and geographies. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven, product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems for ingesting, processing, and serving personalized signals.</li>
<li>Design data models and APIs that enable Copilot to reason about user context, preferences, and history.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in backend technologies.</li>
<li>Familiarity with applied AI and its unique challenges.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Benefits and other compensation.</li>
<li>Opportunity to work on cutting-edge AI projects.</li>
<li>Collaborative and inclusive work environment.</li>
<li>Professional development opportunities.</li>
<li>Flexible work arrangements.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $163,000 – $296,400 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, backend technologies, applied AI, Kafka, Spark, Flink, large scale data systems, AI platforms, Machine Learning frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to redefine the future of AI, building intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-data-personalization-microsoft-ai-4/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>97c676ce-653</externalid>
      <Title>Principal Software Engineer - Data, Personalization</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Principal Software Engineer to lead the design and development of distributed data infrastructure, APIs and personalization pipelines that drive Copilot&#39;s intelligence.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer, you will work across Microsoft AI and Copilot teams to build scalable, low-latency systems for ingesting, processing, and serving personalized signals. You will design data models and APIs that enable Copilot to reason about user context, preferences, and history. You will build real-time and batch personalization engines that adapt Copilot&#39;s behavior. You will collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled. You will optimize for performance, reliability, and cost across diverse workloads and geographies. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven, product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems for ingesting, processing, and serving personalized signals.</li>
<li>Design data models and APIs that enable Copilot to reason about user context, preferences, and history.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in backend technologies.</li>
<li>Familiarity with applied AI and its unique challenges.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Methodical approach to problem-solving.</li>
<li>Ability to identify, analyze, and resolve complex technical issues.</li>
<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Benefits and other compensation.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $163,000 – $296,400 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, backend technologies, applied AI, Kafka, Spark, Flink, large scale data systems, AI platforms, Machine Learning frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to redefine the future of AI, building intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-data-personalization-microsoft-ai-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>2902359a-64d</externalid>
      <Title>Member of Technical Staff, Infrastructure Data &amp; Analytics</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff, Infrastructure Data &amp; Analytics to join their MAI SuperIntelligence Team. This role sits at the heart of strategic decision-making, turning raw telemetry into trusted, decision-quality insights on utilization, capacity, readiness, and efficiency. You&#39;ll work directly with leadership to shape the company&#39;s direction in the Superintelligence space.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff, Infrastructure Data &amp; Analytics, you will act as the technical lead and owner for infrastructure analytics across compute, storage, and networking. You will design and build durable, scalable data pipelines that ingest telemetry from clusters, schedulers, health systems, and capacity trackers into Data Warehouse. You will define and standardize core metrics and semantics (e.g., utilization, occupancy, MFU, goodput, capacity readiness, delivery-to-production). You will architect and maintain self-service dashboards and APIs for fleet, cluster, and squad-level visibility. You will partner closely with stakeholders across Supercomputing Infra, Researchers, Strategy and Executives to ensure metrics reflect operational and business reality.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Act as the technical lead and owner for infrastructure analytics across compute, storage, and networking.</li>
<li>Design and build durable, scalable data pipelines that ingest telemetry from clusters, schedulers, health systems, and capacity trackers into Data Warehouse.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>8+ years technical engineering experience with data engineering, analytics, or data science, with increasing technical ownership in a startup environment.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Distributed data processing frameworks and large-scale data systems.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong communication skills; can explain complex systems clearly to senior leaders.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>
<li>Certain roles may be eligible for benefits and other compensation.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>data engineering, analytics, data science, distributed data processing frameworks, large-scale data systems, ETL orchestration frameworks, Airflow, Dagster</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that empowers every person and every organization on the planet to achieve more. With a growth mindset, they innovate to empower others and collaborate to realize their shared goals.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-infrastructure-data-analytics-mai-superintelligence-team/</Applyto>
      <Location>Multiple Locations, United States</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>759571da-f04</externalid>
      <Title>Senior Software Engineer - Microsoft AI, Copilot</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft are looking for a talented Senior Software Engineer to join their Microsoft AI, Copilot team in Mountain View. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Senior Software Engineer on the Microsoft AI, Copilot team, you will be responsible for designing and developing distributed systems and APIs that power adaptive, context-aware experiences across Microsoft AI. You will work across Microsoft AI and Copilot teams to build real-time and batch personalization engines that adapt Copilot&#39;s behavior. You will also collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems/data pipelines for ingesting, processing, and serving personalized signals.</li>
<li>Design, build, and maintain robust pipelines for telemetry, product usage, and experimentation data.</li>
<li>Design data models and APIs that enable Copilot to reason about user context, preferences, and history.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>4+ years technical engineering experience building systems with coding in languages including, but not limited to, Python, C#, C++, Golang, Rust, Java.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with large scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range of $119,800 - $234,700 per year.</li>
<li>Comprehensive benefits package, including health insurance, retirement plan, and paid time off.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
<li>Access to cutting-edge technology and tools.</li>
<li>Flexible work arrangements, including remote work options.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>Python, C#, C++, Golang, Rust, Java, large scale data systems, AI platforms, frameworks, APIs, machine learning, natural language processing, computer vision</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices. The company is known for its Windows operating system, Office software suite, and Xbox gaming console. Microsoft is also a leader in cloud computing, artificial intelligence, and cybersecurity.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-microsoft-ai-copilot-3/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>4f2a566f-ec6</externalid>
      <Title>Member of Technical Staff - Copilot Data &amp; Insights</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Copilot Data &amp; Insights to join their New York office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Copilot Data &amp; Insights, you will be responsible for architecting scalable, low-latency systems and data pipelines for ingesting and processing data with LLMs and serving customer insights. You will design, build, and maintain robust pipelines for product usage data that provide insights to improve Copilot features. You will own orchestration, monitoring, and DevOps for critical data workflows. You will design data models and APIs that enable customer-loop insights using LLMs. You will collaborate with privacy, security, and responsible AI teams to ensure customer insights are safe, transparent, and user-controlled. You will optimize for performance, reliability, and cost across diverse workloads and geographies. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path through roadblocks to get your work into users&#39; hands quickly and iteratively. You will enjoy working in a fast-paced, design-driven product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems and data pipelines for ingesting and processing data with LLMs and serving customer insights.</li>
<li>Design, build, and maintain robust pipelines for product usage data that provide insights to improve Copilot features.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with large scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Comprehensive benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, large scale data systems, AI platforms, frameworks, APIs, Kafka, Spark, Flink</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that is redefining the future of AI. They are seeking passionate engineers to tackle some of the most complex and impactful challenges of our time.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-copilot-data-insights/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>03dd7222-bf0</externalid>
      <Title>Member of Technical Staff - Copilot Data &amp; Insights</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Copilot Data &amp; Insights to join their Redmond office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Copilot Data &amp; Insights, you will be responsible for architecting scalable, low-latency systems and data pipelines for ingesting and processing data with LLMs and serving customer insights. You will design, build, and maintain robust pipelines for product usage data that provide insights to improve Copilot features. You will own orchestration, monitoring, and DevOps for critical data workflows. You will design data models and APIs that enable customer-loop insights using LLMs. You will collaborate with privacy, security, and responsible AI teams to ensure customer insights are safe, transparent, and user-controlled. You will optimize for performance, reliability, and cost across diverse workloads and geographies. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path through roadblocks to get your work into users&#39; hands quickly and iteratively. You will enjoy working in a fast-paced, design-driven product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems and data pipelines for ingesting and processing data with LLMs and serving customer insights.</li>
<li>Design, build, and maintain robust pipelines for product usage data that provide insights to improve Copilot features.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with large scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Comprehensive benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, large scale data systems, AI platforms, frameworks, APIs, Kafka, Spark, Flink</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that is redefining the future of AI. They are seeking passionate engineers to tackle some of the most complex and impactful challenges of our time.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-copilot-data-insights-3/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>1c29c926-f4e</externalid>
      <Title>Member of Technical Staff - Copilot Data &amp; Insights</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Copilot Data &amp; Insights to join their Mountain View office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>This role focuses on building customer-centric data pipelines and applications that automatically process customer insights using LLMs across Copilot and its features, generating insights on top of Azure environments, along with dashboards, reporting, and APIs that power adaptive, context-aware experiences across Microsoft AI. We aim to make Copilot feel like your Copilot — responsive to your preferences, workflows, and goals — while preserving privacy, security, performance, and scale.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems and data pipelines for ingesting and processing data with LLMs and serving customer insights.</li>
<li>Design, build, and maintain robust pipelines for product usage data that provide insights to improve Copilot features.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with large scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Comprehensive benefits package.</li>
<li>Opportunities for professional growth and development.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, large scale data systems, AI platforms, frameworks, APIs, Kafka, Spark, Flink</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to redefine the future of AI, building intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-copilot-data-insights-2/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>f361254f-dee</externalid>
      <Title>Principal Software Engineer - Data, Personalization</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Principal Software Engineer to lead the design and development of distributed data infrastructure, APIs and personalization pipelines that drive Copilot&#39;s intelligence.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer, you will work across Microsoft AI and Copilot teams to build scalable, low-latency systems for ingesting, processing, and serving personalized signals. You will design data models and APIs that enable Copilot to reason about user context, preferences, and history. You will build real-time and batch personalization engines that adapt Copilot&#39;s behavior. You will collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled. You will optimize for performance, reliability, and cost across diverse workloads and geographies. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path through roadblocks to get your work into users&#39; hands quickly and iteratively. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems for ingesting, processing, and serving personalized signals.</li>
<li>Design data models and APIs that enable Copilot to reason about user context, preferences, and history.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in backend technologies.</li>
<li>Familiarity with applied AI and its unique challenges.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Benefits and other compensation.</li>
<li>Opportunities for professional growth and development.</li>
<li>A positive, inclusive work environment.</li>
<li>A culture of innovation and collaboration.</li>
<li>A commitment to diversity, equity, and inclusion.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $163,000 – $296,400 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, backend technologies, applied AI, Kafka, Spark, Flink, large scale data systems, AI platforms, Machine Learning frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to redefine the future of AI, building intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-data-personalization-microsoft-ai-3/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-05</Postedate>
    </job>
    <job>
      <externalid>65307261-22a</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Senior Software Engineer to join their New York office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Senior Software Engineer, you will be responsible for architecting scalable, low-latency systems/data pipelines for ingesting, processing, and serving personalized signals. You will design, build, and maintain robust pipelines for telemetry, product usage, and experimentation data. You will also design data models and APIs that enable Copilot to reason about user context, preferences, and history.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems/data pipelines for ingesting, processing, and serving personalized signals.</li>
<li>Design, build, and maintain robust pipelines for telemetry, product usage, and experimentation data.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>4+ years technical engineering experience building systems with coding in languages including, but not limited to, Python, C#, C++, Golang, Rust, Java.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with large scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range: $119,800 - $234,700 per year.</li>
<li>Benefits and other compensation.</li>
<li>Opportunity to work with a talented team of engineers and contribute to the development of cutting-edge AI technology.</li>
<li>Professional development opportunities.</li>
<li>Flexible work arrangements, including remote and hybrid options.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>Python, C#, C++, Golang, Rust, Java, large scale data systems, AI platforms, frameworks, APIs, machine learning, data science, cloud computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to redefine the future of AI, building intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-microsoft-ai-copilot-2/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-05</Postedate>
    </job>
    <job>
      <externalid>9c31f5bd-938</externalid>
      <Title>Head of Electronics Systems &amp; Software</Title>
      <Description><![CDATA[<p>We are seeking an experienced Head of Electronics Systems &amp; Software to lead a small team of engineers delivering robust, high-performance electronic architectures for elite motorsport programs, including GT3, WEC, W2RC, and other high-profile racing and niche projects.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>You will own the end-to-end lifecycle of electronics systems—from concept and validation to trackside deployment—while driving software strategy, data systems, reliability, and compliance with series regulations.</p>
<ul>
<li>Lead, mentor, and develop a small team of systems and software engineers; set priorities, allocate resources, conduct performance reviews, and grow capability.</li>
<li>Establish clear delivery plans and engineering standards; ensure on-time, on-budget execution across programs.</li>
<li>Foster a blameless culture of continuous improvement, fast feedback, and reliability.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Several years of direct program involvement in high-level motorsport such as GT3, WEC, W2RC, or WRC (5–8+ years preferred).</li>
<li>Proven track record running a small team of systems and software engineers in a fast-paced race environment.</li>
<li>Strong background in embedded systems and vehicle networks; hands-on with ECU configuration, sensor/actuator integration, and diagnostics.</li>
</ul>
]]></Description>
      <Jobtype>permanent</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>embedded systems, vehicle networks, ECU configuration, sensor/actuator integration, diagnostics, motorsport experience, team leadership, software strategy, data systems, reliability</Skills>
      <Category>Engineering</Category>
      <Industry>Motorsport</Industry>
      <Employername>Prodrive</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.prodrive.com.png</Employerlogo>
      <Employerdescription>Prodrive is the world&apos;s leading independent motorsport company and the business behind some of the greatest names and achievements in motorsport over the last 40 years.</Employerdescription>
      <Employerwebsite>https://careers.prodrive.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.prodrive.com/vacancies/head-of-electronics-systems-software</Applyto>
      <Location>Banbury, Oxfordshire, England</Location>
      <Country></Country>
      <Postedate>2025-12-22</Postedate>
    </job>
  </jobs>
</source>