<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>542096f5-82b</externalid>
      <Title>Business Intelligence Manager</Title>
<Description><![CDATA[<p>As a Business Intelligence Manager, you will play a critical role in building secure, interactive data and AI applications hosted natively on the Databricks platform. You will design, build, and maintain scalable data web applications, AI chatbots, and custom operational interfaces using frameworks like Streamlit, React, and FastAPI. By leveraging Databricks Apps&#39; serverless infrastructure, you will eliminate the need for external hosting and empower business users to make informed decisions by bridging the gap between raw data and solutions using your engineering prowess, Databricks Apps, Databricks SQL, Lakebase, and AgentBricks.</p>
<p>The Impact You Will Have:</p>
<ul>
<li>Build: You will design and develop robust frontend interfaces and API backends (e.g., FastAPI routing user queries to model-serving endpoints). You will build solutions ranging from data-rich dashboards to enterprise chat solutions powered by the Mosaic AI Agent Framework.</li>
<li>Architect: You will design secure and scalable application architectures that satisfy go-to-market (GTM) requirements for building custom SaaS applications.</li>
<li>Scale: You will create scalable applications that seamlessly connect to Databricks SQL via the Statement Execution API or Databricks SDK. You will establish CI/CD pipelines using Databricks Asset Bundles (DABs) to automate deployment across development, staging, and production workspaces.</li>
</ul>
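<p>As a sketch of the submit-then-poll pattern used with the Statement Execution API, the snippet below polls a statement until it reaches a terminal state. The HTTP call is stubbed out to keep the example self-contained; the stub, the statement ID, and the response shape (<code>status.state</code>, <code>result.data_array</code>) are illustrative assumptions, not an official Databricks client.</p>

```python
import time

def fetch_statement_status(statement_id, _state={"polls": 0}):
    """Hypothetical stand-in for an HTTP GET of a statement's status.

    A real client would call the Statement Execution API with a bearer
    token (e.g. via `requests`); here we pretend the warehouse finishes
    the query on the third poll so the sketch runs anywhere.
    """
    _state["polls"] += 1
    if _state["polls"] >= 3:
        return {"status": {"state": "SUCCEEDED"},
                "result": {"data_array": [["42"]]}}
    return {"status": {"state": "RUNNING"}}

def wait_for_result(statement_id, timeout_s=30.0, poll_interval_s=0.01):
    """Poll until the statement reaches a terminal state or we time out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = fetch_statement_status(statement_id)
        state = resp["status"]["state"]
        if state == "SUCCEEDED":
            return resp["result"]["data_array"]
        if state in ("FAILED", "CANCELED", "CLOSED"):
            raise RuntimeError(f"statement ended in state {state}")
        time.sleep(poll_interval_s)
    raise TimeoutError("statement did not finish in time")
```

<p>In practice the Databricks SDK for Python wraps this polling loop for you; the sketch only shows the shape of the pattern an app backend would rely on.</p>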
<p>What we look for:</p>
<ul>
<li>You have 5+ years of experience working as a Software Engineer, Data App Developer, or Full-Stack Engineer building interactive web applications.</li>
<li>You are proficient in Python, DBSQL, and/or Node.js. Experience with frameworks like Streamlit, Dash, Flask, FastAPI, React, or Express is required.</li>
<li>You know the Databricks ecosystem. Familiarity with Unity Catalog, Databricks SQL, the Databricks SDK for Python, and Model Serving is highly preferred.</li>
<li>You have built for scale and security. You have experience with CI/CD tools, Infrastructure as Code (specifically Databricks Asset Bundles), and implementing secure OAuth flows.</li>
<li>You are passionate about applying AI. You have experience integrating LLMs or the Mosaic AI Agent Framework into application backends to deliver intelligent chat and RAG solutions.</li>
<li>You excel in a collaborative environment. You can translate stakeholder requirements into intuitive user interfaces, working through dependencies and troubleshooting deployment errors or telemetry logs.</li>
</ul>
<p><strong>Pay Range Transparency</strong></p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$158,200-$217,450 USD</Salaryrange>
      <Skills>Python, DBSQL, Node.js, Streamlit, React, FastAPI, Unity Catalog, Databricks SQL, Databricks SDK for Python, Model Serving, CI/CD tools, Infrastructure as Code, OAuth flows, LLMs, Mosaic AI Agent Frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified data intelligence platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>158200</Compensationmin>
      <Compensationmax>217450</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8501030002</Applyto>
      <Location>New York; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aebaacf5-640</externalid>
      <Title>Integrations Engineer</Title>
<Description><![CDATA[<p>You will own the full lifecycle of integrations that power Hebbia&#39;s AI, from designing connectors to deploying them in production, monitoring their behavior, and debugging failures in real time.</p>
<p>You&#39;ll work across systems like Snowflake, S3, SharePoint, and internal customer infrastructure, building pipelines that need to handle real-world complexity: unreliable APIs, evolving schemas, massive datasets, and edge cases that don’t show up in documentation.</p>
<p>This role is hands-on, high-ownership, and deeply technical. You won’t just write code; you’ll develop the instincts to operate and debug complex distributed systems in production.</p>
<p>You will build connectors and ingestion pipelines that bring enterprise data into Hebbia&#39;s AI platform, from Snowflake warehouses and SharePoint libraries to live pricing feeds, high-velocity news data, and proprietary customer systems.</p>
<ul>
<li>You will design and operate pipelines that handle scale, failures, and edge cases gracefully.</li>
<li>You will debug issues across APIs, auth systems, and data formats, often under real-time customer pressure.</li>
<li>You will own reliability end-to-end: monitoring, alerting, on-call, and incident response.</li>
<li>You will improve internal tooling and observability to make systems more robust and easier to operate.</li>
<li>You will partner with product and customer teams to scope, prioritize, and ship the integrations that unlock Hebbia&#39;s highest-value use cases.</li>
<li>You will design and ship agents that sit on top of the ingestion layer, making enterprise data accessible and actionable across all of Hebbia&#39;s product surfaces, from document analysis to structured query workflows.</li>
</ul>
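<p>The pagination and transient-failure concerns above can be sketched in a few lines. Everything here (the <code>fetch_page</code> contract, the cursor shape, and the backoff constants) is a hypothetical illustration, not Hebbia&#39;s actual connector code.</p>

```python
import time

def fetch_all(fetch_page, max_retries=3, backoff_s=0.01):
    """Drain a cursor-paginated API, retrying transient failures with backoff.

    Assumed contract: `fetch_page(cursor)` returns (items, next_cursor),
    with next_cursor=None on the last page, and may raise ConnectionError
    on transient faults.
    """
    items, cursor = [], None
    while True:
        for attempt in range(max_retries):
            try:
                page, cursor = fetch_page(cursor)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise  # give up after the final retry
                time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
        items.extend(page)
        if cursor is None:
            return items
```

<p>Real connectors add concerns this sketch omits: honoring Retry-After headers, checkpointing the cursor so a crash resumes mid-sync, and distinguishing retryable from fatal errors.</p>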
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $265,000</Salaryrange>
      <Skills>Python, APIs, OAuth flows, webhook patterns, rate limiting, pagination, cloud infrastructure, AWS, Kafka, PostgreSQL, Redis, ElasticSearch, enterprise data platforms, document processing pipelines, content extraction systems, agentic systems, LLM-enabled products, AI tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside, founded in 2020 by George Sivulka and backed by Peter Thiel and Andreessen Horowitz.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>160000</Compensationmin>
      <Compensationmax>265000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4675784005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e9511759-d2c</externalid>
      <Title>Forward-Deployed Engineer - API Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Forward-Deployed Engineer to be a hands-on technical partner for our API customers. This is a deeply technical, customer-facing role—equal parts solution architect, developer advocate, and technical program manager.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Serve as the dedicated technical partner for strategic API Platform customers, guiding them through architecture design, integration, and optimization.</li>
<li>Troubleshoot API performance, grounding quality, and integration challenges—diving directly into code and logs when needed.</li>
<li>Prototype example integrations, search workflows, and demos to accelerate adoption and inspire customer innovation.</li>
<li>Collaborate with Engineering and Product to resolve high-priority escalations, ensuring minimal downtime and maximum reliability.</li>
<li>Capture and synthesize customer feedback to improve API features, SDKs, and developer experience.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>5+ years in technical support engineering, software engineering, or solutions engineering, with direct application development experience.</li>
<li>Strong API integration skills—able to design and debug REST/JSON payloads, manage auth flows, and optimize latency.</li>
<li>Proven track record of diagnosing complex technical issues across distributed systems, APIs, and customer environments.</li>
<li>Excellent communicator who can work with both developer teams and non-technical stakeholders.</li>
<li>Bachelor&#39;s degree in Computer Science or equivalent hands-on experience.</li>
</ul>
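<p>One minimal way to approach the latency and payload debugging described above is to wrap the transport behind a plain function and validate responses early. All names below are illustrative assumptions, not Perplexity&#39;s API.</p>

```python
import json
import time

def timed_call(call, payload):
    """Send a JSON payload through `call` and return (parsed_json, latency_ms).

    `call` stands in for any function taking a JSON string and returning
    one -- in real use, a thin wrapper around urllib or requests.
    """
    body = json.dumps(payload)
    start = time.perf_counter()
    raw = call(body)
    latency_ms = (time.perf_counter() - start) * 1000.0
    parsed = json.loads(raw)
    return parsed, latency_ms

def check_response(resp, required=("id", "results")):
    """Surface missing fields at the boundary instead of deep in app code."""
    missing = [k for k in required if k not in resp]
    if missing:
        raise ValueError(f"response missing fields: {missing}")
    return resp
```

<p>Logging the latency and the exact serialized payload alongside each failure is often what turns a vague customer report into a reproducible bug.</p>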
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$205K – $335K</Salaryrange>
      <Skills>technical support engineering, software engineering, solutions engineering, API integration, REST/JSON payloads, auth flows, latency optimization, complex technical issues, distributed systems, APIs, customer environments, search workflows, demos, customer feedback, API features, SDKs, developer experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Perplexity AI</Employername>
      <Employerlogo>https://logos.yubhub.co/perplexity.com.png</Employerlogo>
      <Employerdescription>Perplexity AI is a company that powers AI-native search, retrieval, and automation for some of the world&apos;s most innovative companies. The API Platform is a core infrastructure layer for AI-powered search and automation.</Employerdescription>
      <Employerwebsite>https://www.perplexity.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>205000</Compensationmin>
      <Compensationmax>335000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/perplexity/aa511ea8-96e3-42ba-b28f-5e222170bcee</Applyto>
      <Location>New York City, London, San Francisco, Seattle</Location>
      <Country></Country>
      <Postedate>2026-03-04</Postedate>
    </job>
  </jobs>
</source>