<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>fa75c770-886</externalid>
      <Title>DCSC Automation Coordinator Intern</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We are seeking a motivated and detail-oriented DCSC Intern to join our team and support the integration of Security Compliance practices into our DCSC DevOps workflows. This internship offers hands-on experience with DCSC, cloud infrastructure, and application security, providing a unique opportunity to learn and contribute to real-world projects in a fast-paced environment.</p>
<p>Key Tasks &amp; Responsibilities</p>
<ul>
<li>Contribute to building automation tools for DCSC pipelines</li>
<li>Assist in identifying, analyzing, and documenting business processes for automation opportunities</li>
<li>Participate in discovery projects to evaluate current workflows and recommend areas for improvement</li>
<li>Support the design, development, and deployment of automated solutions using tools such as RPA (Robotic Process Automation), workflow platforms, and scripting languages</li>
<li>Implement low-code solutions (e.g., Microsoft Power Automate, AppSheet, Mendix, etc.) where appropriate to streamline business processes</li>
<li>Collaborate with cross-functional teams to gather requirements and test automation solutions</li>
<li>Monitor automated processes for performance, troubleshoot issues, and suggest improvements</li>
<li>Prepare process documentation, user guides, and training materials for end-users</li>
</ul>
<p>Qualifications &amp; Experience</p>
<ul>
<li>Pursuing a Bachelor&#39;s or Master&#39;s degree in Cybersecurity, CS, IS, IT, or other related fields of study</li>
<li>Expected/achieved minimum 2:1 classification in a Computer Science, Cybersecurity, Engineering or related degree discipline</li>
<li>Data Visualization and Dashboarding: Expertise in tools such as Power BI, Tableau, R Shiny, or Google Data Studio to develop interactive data dashboards</li>
<li>Data Engineering and ETL: Competence in SQL, Python, or R for data manipulation, retrieval, cleansing, and structuring from various sources (e.g., databases, APIs, Excel, GRC platforms)</li>
<li>Database Fundamentals: A solid grasp of SQL databases (e.g., MySQL, PostgreSQL) for effective data storage and retrieval</li>
<li>Automation: The ability to implement automation solutions, aiming to reduce manual effort by 30-40%, particularly for automating reporting and generating real-time alerts for compliance breaches</li>
<li>AI/ML Application: Knowledge of leveraging AI and Machine Learning techniques for proactive risk management, including anomaly detection, risk scoring, and predictive analysis</li>
</ul>
<p>Required Skills:</p>
<ul>
<li>Complex modeling in Data Center Security Compliance Access Administration and periodic access reviews</li>
<li>Contribute to testing AI tools and practices in access management</li>
<li>Gain knowledge in DCSC process development used for Identity Management and access administration</li>
<li>Hands-on implementation of DCSC principles in a production environment</li>
<li>General introduction to the information security world</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Robotic Process Automation</li>
<li>Workflow platforms</li>
<li>Scripting languages</li>
<li>Low-code solutions</li>
<li>Data visualization</li>
<li>Data engineering</li>
<li>ETL</li>
<li>Database fundamentals</li>
<li>Automation</li>
<li>AI/ML application</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Complex modeling in Data Center Security Compliance Access Administration and periodic access reviews, Contribute to testing AI tools and practices in access management, Gain knowledge in DCSC process development used for Identity Management and access administration, Hands-on implementation of DCSC principles in a production environment, General introduction to the information security world, Robotic Process Automation, Workflow platforms, Scripting languages, Low-code solutions, Data visualization, Data engineering, ETL, Database fundamentals, Automation, AI/ML application</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online. It runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7751595?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>In-Office</Location>
      <Country></Country>
      <Postedate>2026-04-26</Postedate>
    </job>
    <job>
      <externalid>2738c24d-82c</externalid>
      <Title>Senior Data Engineering Manager</Title>
      <Description><![CDATA[<p>Intercom is the AI Customer Service company on a mission to help businesses provide incredible customer experiences. Our AI agent Fin, the most advanced customer service AI agent on the market, lets businesses deliver always-on, impeccable customer service and ultimately transform their customer experiences for the better.</p>
<p>The volume and velocity of data at Intercom are exploding, fueled by our growth and our drive to integrate more sophisticated, AI-assisted data solutions. The Data Engineering team is the critical engine powering Intercom&#39;s future. We are responsible for building and maintaining the distributed foundations that transform raw information into actionable intelligence, empowering all Intercom teams, from Product to Research.</p>
<p>We are looking for a seasoned Senior Data Engineering Manager to take ownership of key data initiatives and drive these efforts forward. This role is about impact and ownership. You will lead a team at the forefront of designing and evolving the core infrastructure that powers our entire data ecosystem.</p>
<ul>
<li>Next-Gen Platform Evolution: Partner with product and business teams and lead the architectural design and implementation of the next generation of our data stack, ensuring it can meet the demands of advanced analytics and AI applications.</li>
<li>Enablement Through Tooling: Partner closely with Analytics Engineers, Analysts, and Data Scientists to build the self-service tooling and infrastructure they need to move fast and deploy safely.</li>
<li>Data Quality Guardianship: Implement advanced monitoring systems to proactively detect, surface, and resolve data quality issues across our high-throughput environment (where dozens of changes can ship daily).</li>
<li>Driving Automation: Develop automation and tooling that streamlines the creation and discovery of high-quality analytics data, making the entire data lifecycle more efficient.</li>
</ul>
<p>To put things in context, here are examples of the strategic, company-shaping initiatives you will be expected to own and drive as a Senior Data Engineering Manager:</p>
<ul>
<li>GTM Data Platform Strategy: Build the data acquisition strategy that will enable us to build the next generation of business-focused internal software.</li>
<li>Conversational BI Strategy: Lead the charge to shift away from complex, technical reporting toward natural language interaction to make data truly democratized and accessible. Users should be able to query information, getting both raw numbers and contextual narratives instantly, without needing a data science degree or waiting on analysts.</li>
<li>Platform &amp; Warehousing Strategy: Lead the architectural and cost review and revamp of our core data infrastructure to ensure it can scale exponentially for future growth and advanced use cases.</li>
</ul>
<p>Recent Wins You&#39;ll Build Upon:</p>
<ul>
<li>AI-assisted local analytics development environment for Airflow and DBT.</li>
<li>Data-rich AI apps containerized on Snowflake SPCS.</li>
<li>A new, modern data catalog solution.</li>
<li>Migrating critical MySQL ingestion pipelines from Aurora to PlanetScale.</li>
</ul>
<p>You are a leader, a builder, and a problem-solver who thrives on solving real-world business problems.</p>
<p>The Essentials:</p>
<ul>
<li>7+ Years Experience: You have a proven, full-time career history in the data space, leading teams of 6+ engineers.</li>
<li>Stakeholder Focus: You can communicate complex technical solutions to a business-focused audience and vice versa. You are comfortable interacting with stakeholders across the entire breadth of the business.</li>
<li>Technical Depth: Your team will be responsible for the majority of execution, but you&#39;re not afraid to get your hands dirty and write code when it&#39;s needed. You lead from the front.</li>
<li>A Leader &amp; Mentor: You naturally recognize opportunities to step back and mentor others, understanding when your guidance will multiply the team&#39;s output.</li>
</ul>
<p>Bonus Points (Our Modern Stack Knowledge):</p>
<ul>
<li>Airflow at Scale: Extensive experience working with Apache Airflow, especially the nuances of operating it reliably in a high-volume environment.</li>
<li>Modern Data Stack Fluency: Familiarity with tools like Snowflake and DBT.</li>
<li>Future-Focused: You keep a keen eye on industry trends and emerging technologies, always thinking about what&#39;s next.</li>
</ul>
<p>Next Steps</p>
<p>If you are passionate about designing resilient analytics infrastructure that scales with a high-growth, global product, we encourage you to apply!</p>
<p>Benefits</p>
<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Pension scheme &amp; match up to 4%</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>
<li>Open vacation policy and flexible holidays so you can take time off when you need it</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>
<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme, with secure bike storage too</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
</ul>
<p>Policies</p>
<p>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate more easily, and create a great culture while still providing flexibility to work from home. We expect employees to be in the office at least three days per week.</p>
<p>We have a radically open and accepting culture at Intercom. We avoid spending time on divisive subjects to foster a safe and cohesive work environment for everyone. As an organization, our policy is to not advocate on behalf of the company or our employees on any social or political topics in our internal or external communications. We respect personal opinion and expression on these topics on personal social platforms on personal time, and do not challenge or confront anyone for their views on non-work-related topics. Our goal is to focus on doing incredible work to achieve our goals and unite the company through our core values.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Airflow, Snowflake, DBT, Apache Airflow, Data Engineering, Data Science, Data Analysis, Data Visualization, SQL, Python, Cloud Computing, Big Data, Machine Learning, Data Mining, Data Warehousing, ETL, Data Governance, Data Security, Data Architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company founded in 2011, trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7574762?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>9ca997fb-218</externalid>
      <Title>Quantitative Developer</Title>
      <Description><![CDATA[<p>We are building a world-class systematic data platform that will power the next generation of our systematic portfolio engines.</p>
<p>The systematic data group is looking for a Quantitative Developer to join our growing team. The team consists of content specialists, data scientists, engineers, and quant developers who are responsible for discovering, maintaining, and analysing sources of alpha for our portfolio managers.</p>
<p>The role builds on the individual&#39;s knowledge and skills in four key areas of quantitative investing: data, statistics, technology, and financial markets.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Use finance knowledge and statistical knowledge to analyse potential alpha sources and present findings to portfolio managers and quantitative analysts.</li>
<li>Build quant tools to help portfolio managers research, evaluate, combine alphas, and understand risks.</li>
<li>Design and maintain tools to evaluate and monitor data quality and integrity for a wide variety of data sources.</li>
<li>Engage with vendors and brokers, and perform analytics to understand the characteristics of datasets.</li>
<li>Interact with portfolio managers and quantitative analysts to understand their use cases and recommend datasets to help maximise their profitability.</li>
</ul>
<p>Skills Required:</p>
<ul>
<li>3+ years of work experience as a financial engineer, data scientist, or quant developer.</li>
<li>Strong knowledge of Python and/or C++, Java, or C#.</li>
<li>Familiarity with data pipeline engineering, ETL for large datasets, and scheduling tools such as Airflow.</li>
<li>Strong SQL and database experience, including PL/SQL or T-SQL.</li>
<li>Understanding of the typical software development lifecycle and familiarity with Linux, GitHub, and CI/CD.</li>
<li>Ph.D. or Master&#39;s in computer science, mathematics, statistics, or another field requiring quantitative analysis.</li>
</ul>
<p>Beneficial Skills and Experience:</p>
<ul>
<li>Understanding of risk models and performance attribution.</li>
<li>Experience with financial markets such as equities and futures.</li>
<li>Knowledge of statistical techniques and their usage.</li>
</ul>
<p>The estimated base salary range for this position is $165,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$165,000 to $250,000</Salaryrange>
      <Skills>Python, C++, Java, C#, data pipeline engineering, ETL, Airflow, SQL, database, Linux, GitHub, CI/CD, Ph.D., Masters</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology company that provides systematic data platforms for portfolio engines.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755952876477?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f1a5b85-116</externalid>
      <Title>Mission Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled and motivated Mission Software Engineer to join our dynamic Federal Engineering team. As a part of this team, you will play a critical role in supporting Scale&#39;s government customers by scoping and developing onsite solutions.</p>
<p>Our scalable, high-performance platform is the foundation for these customer solutions, and your expertise will be instrumental in designing and implementing systems that can handle interactions with existing customer systems to help our products integrate into existing customer workflows.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work directly with customers to understand their problems and translate those into features in Scale&#39;s platform.</li>
<li>Be open to &gt;50% travel or relocation to a key customer geographic location.</li>
<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>
<li>Implement end-to-end data integrations, syncing customer&#39;s data to Scale&#39;s platform and back.</li>
<li>Deploy and maintain Scale software at customer sites.</li>
<li>Develop customer-requested features and work closely with customers to ensure those features win customer love.</li>
<li>Build robust and reliable backend systems that can serve as standalone products, empowering customers to accelerate their own AI ambitions.</li>
<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>
</ul>
<p>Ideal Candidate:</p>
<ul>
<li>Track record of success as a hybrid customer-facing engineer or forward-deployed software engineer, with the ability to quickly adapt to different roles.</li>
<li>Prior experience developing with Python and JavaScript, or other modern software languages. Familiarity with Node and React is a plus.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus.</li>
<li>Linux experience: Understanding of shell scripting, operating systems, etc.</li>
<li>Networking experience: Understanding of networking technologies and configuration (ports, protocols, etc.)</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval.</p>
<p>Benefits:</p>
<ul>
<li>Comprehensive health, dental, and vision coverage</li>
<li>Retirement benefits</li>
<li>A learning and development stipend</li>
<li>Generous PTO</li>
<li>Commuter stipend</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$138,000-$292,560 USD</Salaryrange>
      <Skills>Python, JavaScript, Node, React, Cloud-Native Technologies, Linux, Networking, Data Engineering, ETL, Data Modeling, Data Warehousing, Data Governance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4481921005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Boston, Massachusetts ; Honolulu, HI; San Diego, CA; San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>65e23a71-601</externalid>
      <Title>Senior Data Scientist, Analytics</Title>
      <Description><![CDATA[<p>We are seeking a Senior Data Scientist to join our Data Science &amp; Analytics team. As a Senior Data Scientist, you will help us make it easier and more fun for people to talk and hang out before, during, and after playing games.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with teams throughout Discord through the full lifecycle of data science analytics, from ideation and exploratory analysis to building dashboards and reports, and A/B testing.</li>
<li>Define KPIs and metrics that help improve the user experience, encapsulating these measures in clean, crisp dashboards that provide the company with timely and actionable information.</li>
<li>Use our amazing infrastructure to quickly and easily build custom data sets to monitor novel product features and processes.</li>
<li>Proactively socialize insights, dashboards, and reports with technical and non-technical audiences, soliciting feedback on where to improve.</li>
<li>Be a champion of A/B testing and help groups throughout the company design, analyze, and interpret A/B tests correctly.</li>
<li>Collaborate with data and engineering teams to design scalable and future-proof instrumentation.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of experience autonomously translating ambiguous business problems into deep, informative insights through hands-on analytics.</li>
<li>4+ years of experience building performant dashboards using Tableau, Looker, or similar software, with proficiency in designing clean, crisp visualizations.</li>
<li>4+ years of experience writing excellent SQL.</li>
<li>Excellent communication skills, with the ability to translate complicated findings or technical approaches in easy-to-understand ways.</li>
<li>4+ years of experience in the design, analysis, and interpretation of A/B tests in a large data environment.</li>
<li>A desire to work with amazing, passionate people who care deeply about solving challenging problems to improve Discord.</li>
<li>Last but not least, a collaborative attitude and a healthy dose of natural curiosity!</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Passion for Discord or online communities.</li>
<li>Experience with technical leadership, being the point person for one or more stakeholder groups.</li>
<li>Experience with analytics for social media or international subscription-based online services, including familiarity with concepts such as social graphs, LTV analysis, funnel analysis, etc.</li>
<li>Experience with user engagement and growth problems.</li>
<li>Experience writing performant code in BigQuery SQL.</li>
<li>Experience writing production ETL.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$196,000 to $220,500 + equity + benefits</Salaryrange>
      <Skills>Tableau, Looker, SQL, A/B testing, Data analysis, Data visualization, BigQuery SQL, ETL, Social media analytics, International subscription-based online services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a platform for communicating and interacting with others through voice, video, and text. It has over 200 million monthly active users.</Employerdescription>
      <Employerwebsite>https://discord.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8468440002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bfddfcc3-e38</externalid>
      <Title>Senior Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer, you will lead the development of a vertical feature or a horizontal capability to include defining requirements with stakeholders and implementation until it is accepted by the stakeholders.</p>
<p>You will:</p>
<ul>
<li>Lead the design and implementation of scalable backend systems and distributed architectures for Federal customers.</li>
<li>Manage the full lifecycle of feature development, from requirement definition to deployment on classified networks.</li>
<li>Direct the orchestration of asynchronous agent fleets to meet mission requirements.</li>
<li>Lead customer engagements to translate mission needs into technical requirements.</li>
<li>Own the communication with stakeholders to ensure implementation meets defined acceptance criteria.</li>
<li>Conduct technical reviews and identify risks within machine learning infrastructure and model serving.</li>
<li>Drive the platform roadmap by providing technical specifications for Federal product offerings.</li>
</ul>
<p>Ideally you will have:</p>
<ul>
<li>Full Stack Development: Proficiency in front-end and back-end development and infrastructure, including experience with modern web development frameworks, programming languages, and databases.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus.</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles.</li>
<li>AI Application Integration: Familiarity with integrating Large Language Models (LLMs) and building agentic workflows. Understanding of prompt engineering, retrieval-augmented generation (RAG), and agent orchestration is beneficial.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
<li>Collaboration and Communication: Excellent interpersonal and communication skills to effectively collaborate with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment.</li>
<li>Adaptability and Learning Agility: Willingness to embrace new technologies, learn new skills, and adapt to evolving project requirements. Ability to quickly grasp and apply new concepts and stay up-to-date with emerging trends in software engineering.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$311,000 USD (San Francisco, New York, Seattle) $194,400-$279,000 USD (Hawaii, Washington DC, Texas, Colorado) $162,400-$233,000 USD (St. Louis)</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Docker, Kubernetes, AWS, Azure, GCP, ETL, data modeling, data warehousing, data governance, Large Language Models, prompt engineering, retrieval-augmented generation, agent orchestration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674911005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>21b40571-b50</externalid>
      <Title>Account Executive, Commercial</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. Since 2016, we’ve grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases. As of February 2025, we’ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</p>
<p>As an Account Executive, you will be responsible for managing our commercial customer base. The ideal person will be a proactive and curious member of our growing Sales team, identifying new business with prospects and growth opportunities for clients. Foresight and experience working intricate sales cycles will take this individual confidently into the future of dbt Labs.</p>
<p>In this role, you can expect to:</p>
<ul>
<li>Build, manage and close your own pipeline of companies that you believe will benefit from the dbt Cloud offering</li>
<li>Manage and deepen the dbt Cloud footprint in existing accounts, optimizing our impact on these companies</li>
<li>Engage with technology partners and ecosystem service providers to optimize our impact and reach in the region</li>
<li>Lead and contribute to team projects that develop our sales process</li>
<li>Work with product to build and maintain the dbt Cloud enterprise roadmap</li>
</ul>
<p>We’re looking for someone who has:</p>
<ul>
<li>2+ years closing experience in technology sales, with a proven track record of exceeding annual targets</li>
<li>Ability to understand complex technical concepts and develop them into a consultative sale</li>
<li>Excellent verbal, written, and in-person communication skills to engage stakeholders at all levels of an analytics organization (individual developer up to CTO)</li>
<li>The diligence and organizational skills to work long, intricate sales cycles involving multiple client teams</li>
<li>Ability to operate in an ambiguous and fast-paced work environment</li>
<li>A passion for being an inclusive teammate and involved member of the community</li>
<li>Experience with SQL or willingness to learn</li>
</ul>
<p>You have an edge if you have:</p>
<ul>
<li>Prior experience in analytics, ETL, BI, and/or open-sourced software</li>
<li>Knowledge of or prior experience with dbt</li>
<li>Prior experience selling into the Nordics</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, RSUs, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions about dbt Labs’ total rewards during your interview process.</p>
<p>The typical starting salary range for this role is €124,000 to €150,000 EUR, with growth into the €170,000s.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>€124,000 to €150,000 EUR, with growth into the €170,000s</Salaryrange>
      <Skills>technology sales, complex technical concepts, consultative sale, SQL, ETL, BI, open-sourced software, analytics, data engineering, sales cycle management</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4663371005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7b478be3-c4b</externalid>
      <Title>Majors Sales Director (Boston)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Majors Sales Director to join our Revenue Team. As a key member of our growing Sales team, you will be responsible for building out our strategic customer base throughout the Northeast region. You will own the full sales cycle from lead to ongoing utilization for enterprise prospects, organize POC implementations of dbt Cloud Enterprise, and win new dbt Cloud Enterprise customers each year.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning the full sales cycle from lead to ongoing utilization for enterprise prospects</li>
<li>Organizing POC implementations of dbt Cloud Enterprise</li>
<li>Winning 20 new dbt Cloud Enterprise customers per year (after ramp)</li>
<li>Leading and contributing to team projects that develop our sales process</li>
<li>Working with product to build and maintain the dbt Cloud enterprise roadmap</li>
<li>Becoming an expert in SQL, dbt, and enterprise data operations</li>
<li>Being an active member of the dbt open source community</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>7+ years closing experience in technology sales, with a proven track record of exceeding annual targets</li>
<li>Ability to understand complex technical concepts and develop them into a consultative sale</li>
<li>Excellent verbal, written, and in-person communication skills to engage stakeholders at all levels of an analytics organization (individual developer up to CTO)</li>
<li>The diligence and organizational skills to work long, intricate sales cycles involving multiple client teams</li>
<li>Ability to operate in an ambiguous and fast-paced work environment</li>
<li>A passion for being an inclusive teammate and involved member of the community</li>
<li>Experience with SQL or willingness to learn</li>
</ul>
<p>Prior experience in analytics, ETL, BI, and/or open-sourced software is a plus, as well as knowledge of or prior experience with dbt.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$300,000-$380,000 USD</Salaryrange>
      <Skills>SQL, dbt, enterprise data operations, sales, consultative sales, analytics, ETL, BI, open-sourced software</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform that helps data teams transform raw data into reliable, actionable insights, serving over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4632327005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>US East - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ce7ea64c-436</externalid>
      <Title>Enterprise Sales Director (North Central)</Title>
      <Description><![CDATA[<p>We&#39;re looking for an Enterprise Sales Director to join our Revenue Team. As a key member of our growing Sales team, you will be responsible for building out our enterprise customer base within the North Central region of the US. With a proven track record of exceeding annual targets, you will identify new business with prospects and growth opportunities for clients. Your ability to understand complex technical concepts and develop them into a consultative sale will be essential in engaging stakeholders at all levels of an analytics organization.</p>
<p>Responsibilities:</p>
<ul>
<li>Own the full sales cycle from lead to ongoing utilization for enterprise prospects</li>
<li>Organize POC implementations of dbt Cloud Enterprise</li>
<li>Win 20 new dbt Cloud Enterprise customers per year (after ramp)</li>
<li>Lead and contribute to team projects that develop our sales process</li>
<li>Work with product to build and maintain the dbt Cloud enterprise roadmap</li>
<li>Become an expert in SQL, dbt, and enterprise data operations</li>
<li>Be an active member of the dbt open source community</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years closing experience in technology sales, with a proven track record of exceeding annual targets</li>
<li>Ability to understand complex technical concepts and develop them into a consultative sale</li>
<li>Excellent verbal, written, and in-person communication skills to engage stakeholders at all levels of an analytics organization</li>
<li>The diligence and organizational skills to work long, intricate sales cycles involving multiple client teams</li>
<li>Ability to operate in an ambiguous and fast-paced work environment</li>
<li>A passion for being an inclusive teammate and involved member of the community</li>
<li>Experience with SQL or willingness to learn</li>
</ul>
<p>What will make you stand out:</p>
<ul>
<li>Prior experience in analytics, ETL, BI, and/or open-sourced software</li>
<li>Knowledge of or prior experience with dbt</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:
<ul>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
</li>
</ul>
<p>Compensation:</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, RSUs, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions about dbt Labs’ total rewards during your interview process. In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.</p>
<p>Sales Director OTE Range $238,000-$320,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$238,000-$320,000 USD</Salaryrange>
      <Skills>technology sales, SQL, dbt, enterprise data operations, complex technical concepts, consultative sale, analytics, ETL, BI, open-sourced software</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4651705005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>US Central - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>328a534b-bac</externalid>
      <Title>Customer Sales Director (Austin, TX)</Title>
      <Description><![CDATA[<p>We are looking for a Customer Sales Director to focus on an at-scale strategy to support, retain, and grow a mix of our Commercial and Enterprise customer base. This is a hybrid role based in Austin, Texas.</p>
<p>The ideal candidate will have 4+ years of experience in SaaS sales or account management, with a proven track record of exceeding targets. They will be able to build a strategic plan to drive expansion in a portfolio of Commercial and Enterprise accounts, and manage multiple sales cycles and customer campaigns targeting Analytics Engineering, Data Platform, and Data Governance personas.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building a strategic plan to drive expansion in a portfolio of Commercial and Enterprise accounts</li>
<li>Managing multiple sales cycles and customer campaigns targeting Analytics Engineering, Data Platform, and Data Governance personas</li>
<li>Protecting renewals by monitoring account signals, deepening executive alignment, and helping customers realize consistent value</li>
</ul>
<p>The successful candidate will have strong consultative selling skills, engaging effectively with both technical and business audiences. They will be proactive and organized, capable of independently managing a diverse book of business.</p>
<p>Preferred qualifications:</p>
<ul>
<li>Prior experience in analytics, ETL, BI, or open-source software</li>
<li>Familiarity with dbt (Core or Cloud) and the modern data stack, including platforms like Snowflake, BigQuery, Redshift, or Databricks</li>
<li>Experience with consumption- and/or usage-based pricing structures</li>
<li>Experience with the MEDD(P)ICC sales methodology / Command of the Message</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Unlimited vacation time</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for health &amp; wellness, home office setup, cell phone &amp; internet, learning &amp; development, and office space</li>
</ul>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS sales, account management, analytics, ETL, BI, open-source software, dbt (Core or Cloud), Snowflake, BigQuery, Redshift, Databricks, consumption and usage-based pricing structures, MEDD(P)ICC sales methodology / Command of the Message</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer in analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4616931005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b492f9ba-bb6</externalid>
      <Title>Enterprise Account Executive</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for an Enterprise Account Executive to grow and manage our enterprise customer base in DACH. As a proactive and curious member of our growing Sales team, you will identify new business with prospects and growth opportunities for clients.</p>
<p>In this role, you can expect to:</p>
<ul>
<li>Build, manage and close your own pipeline of companies that you believe will benefit from the dbt Cloud offering</li>
<li>Manage and deepen the dbt Cloud footprint in existing accounts, optimizing our impact on these companies</li>
<li>Engage with technology partners and ecosystem service providers to optimize our impact and reach in the region</li>
<li>Lead and contribute to team projects that develop our sales process</li>
<li>Work with product to build and maintain the dbt Cloud enterprise roadmap</li>
</ul>
<p>We&#39;re looking for someone who has:</p>
<ul>
<li>Demonstrable ability of building and closing your own pipeline within enterprise accounts</li>
<li>4+ years closing experience in technology sales, with a proven track record of exceeding annual targets</li>
<li>Ability to understand complex technical concepts and develop them into a consultative sale</li>
<li>Excellent verbal, written, and in-person communication skills to engage stakeholders at all levels of an analytics organization (individual developer up to CTO)</li>
<li>The diligence and organizational skills to work long, intricate sales cycles involving multiple client teams</li>
<li>Ability to operate in an ambiguous and fast-paced work environment</li>
<li>A passion for being an inclusive teammate and involved member of the community</li>
<li>Experience with SQL or willingness to learn</li>
</ul>
<p>You have an edge if you have:</p>
<ul>
<li>Prior experience in analytics, ETL, BI, and/or open-sourced software</li>
<li>Knowledge of or prior experience with dbt</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, RSUs, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions about dbt Labs&#39; total rewards during your interview process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, technology sales, consultative sale, communication skills, sales cycles, analytics organization, analytics, ETL, BI, open-sourced software, dbt</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4668374005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Germany- Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1be89b3c-bc1</externalid>
      <Title>Staff Analytics Engineer</Title>
      <Description><![CDATA[<p>We are currently hiring for multiple teams:</p>
<p>Foundational Data team: Our mission in the Foundational Data team is to build and maintain high-quality datasets frequently used across all of Airbnb. We set company-wide standards that decide how locations are grouped into regions, visitors are measured based upon site traffic, bot traffic is separated from organic traffic, and cloud costs are attributed to Airbnb services. This data is used to build public financial reports, drive strategic marketing decisions, and manage operational costs.</p>
<p>AirCover Data Foundation: The AirCover Data Foundation team is responsible for providing trustworthy, consistent data and metrics to facilitate business insights, informed decision-making, and seamless operations across Airbnb&#39;s AirCover programs, such as Guest Travel Insurance, AirCover for Hosts, and AirCover for Guests.</p>
<p>As a Staff Analytics Engineer, you will bring a unique lens to our data strategy and provide in-depth technical mentorship and leadership to the team. We are looking for someone with expertise in data modeling, metric development, and large-scale distributed data processing frameworks like Presto or Spark.</p>
<p>Leveraging our internal, top-tier data tooling alongside other resources, you will empower both technical and non-technical teams across Airbnb to use our data to make evidence-based decisions. Staff-level engineers are expected to do this with minimal supervision. We value innovative thinkers who consistently seek smarter, more efficient solutions while managing daily operations and deadlines and collaborating with team members.</p>
<p>A Typical Day:</p>
<ul>
<li>Develop high-quality data assets to satisfy a wide range of use cases</li>
<li>Develop frameworks and tools to scale insight generation to meet critical business and infrastructure requirements</li>
<li>Collaborate and build strong partnerships with other data practitioners throughout Airbnb</li>
<li>Influence the trajectory of data in decision making</li>
<li>Improve trust in our data by championing data quality across the stack</li>
</ul>
<p>Your Expertise:</p>
<ul>
<li>9+ years of experience with a BS/Master&#39;s or 6+ years with a PhD</li>
<li>Fluent in SQL and proficient in at least one data engineering language, such as Python or Scala</li>
<li>Expertise using business intelligence and reporting tools like Superset and Tableau</li>
<li>Expertise in large-scale distributed data processing frameworks like Presto or Spark</li>
<li>Expertise in data modeling for data warehouses and/or metrics repositories</li>
<li>Experience with an ETL framework like Airflow</li>
<li>Clear and mature communication skills: ability to distill complex ideas for technical and non-technical stakeholders</li>
<li>Ability to provide technical leadership and mentorship, guiding teams on best practices and contributing to the development of analytic engineering strategies</li>
<li>Experience exploring and leveraging LLMs in everyday tasks (coding, documentation, etc.)</li>
<li>Strong capability to forge trusted partnerships across working teams</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Scaling data tasks via automation</li>
<li>Previous experience in large-scale cloud-based software engineering or system architecture</li>
<li>Experience with A/B experimentation</li>
<li>Familiarity with AI/ML algorithms, including their dependencies on data, as well as their respective strengths and limitations</li>
<li>Designing and/or leveraging high-quality data visualization tools</li>
</ul>
<p>Your Location: This position is US - Remote Eligible. The role may include occasional work at an Airbnb office or attendance at offsites, as agreed to with your manager. While the position is Remote Eligible, you must live in a state where Airbnb, Inc. has a registered entity.</p>
<p>Our Commitment To Inclusion &amp; Belonging: Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions. All qualified individuals are encouraged to apply. We strive to also provide a disability inclusive application and interview process. If you are a candidate with a disability and require reasonable accommodation in order to submit an application, please contact us at: reasonableaccommodations@airbnb.com.</p>
<p>How We&#39;ll Take Care of You: Our job titles may span more than one career level. The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs and market demands. The base pay range is subject to change and may be modified in the future. This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits. Pay Range $194,000-$240,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$194,000-$240,000 USD</Salaryrange>
      <Skills>SQL, Python, Scala, Presto, Spark, Superset, Tableau, ETL, Airflow, Data Modeling, Data Warehousing, Metrics Repositories, LLM AI, AI/ML Algorithms, Data Visualization, Cloud-Based Software Engineering, System Architecture, AB Experimentation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most well-known travel companies in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7733495?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0a3dc5a7-8d9</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Analytics Engineer to support the Enterprise by building reliable, well-modeled, and trusted data for reporting, decision-making, and emerging AI use cases.</p>
<p>As a Senior Analytics Engineer, you will design scalable data models, define consistent business logic, and help establish a strong semantic foundation that enables both human analytics and machine-driven intelligence.</p>
<p>You will partner closely with Finance, People and Company Operations stakeholders, Data Analysts, and Data Engineers to ensure data is accurate, consistent, and easy to consume, whether through dashboards, self-service exploration, or AI-powered workflows.</p>
<p>Responsibilities:</p>
<p>Data Modeling &amp; Semantics</p>
<ul>
<li>Design, build, and maintain scalable data models using dbt and Snowflake</li>
<li>Define and standardize core Finance, HR, and Enterprise-level metrics (e.g., revenue, ARR, billing, attrition, executive insights, security) with clear, governed logic</li>
<li>Establish consistent modeling patterns, naming conventions, and semantic clarity across datasets</li>
<li>Contribute to a shared semantic layer that supports both analytics and AI use cases</li>
</ul>
<p>AI-Ready Data &amp; Snowflake Ecosystem</p>
<ul>
<li>Prepare high-quality, well-governed datasets for use with Snowflake Cortex and Snowflake Intelligence</li>
<li>Enable structured data foundations that support LLM-powered use cases, semantic querying, and intelligent applications</li>
<li>Ensure data is context-rich, well-documented, and aligned with business meaning to improve AI accuracy and trust</li>
</ul>
<p>Data Quality, Governance &amp; Trust</p>
<ul>
<li>Implement robust testing, validation, and documentation practices in dbt</li>
<li>Ensure consistency across reports and dashboards through shared definitions and reusable models</li>
<li>Apply data governance best practices, including access controls, lineage, and auditability</li>
<li>Partner across teams to establish clear ownership and accountability for data assets</li>
</ul>
<p>Collaboration &amp; Delivery</p>
<ul>
<li>Partner with Finance, Analysts, and cross-functional stakeholders to translate business needs into data solutions</li>
<li>Support self-service analytics by building intuitive, reusable datasets</li>
<li>Contribute to scalable data workflows that balance immediate business needs with long-term maintainability</li>
<li>Work within an agile environment, contributing to planning, prioritization, and continuous improvement</li>
</ul>
<p>AI and Data Mindset</p>
<ul>
<li>Demonstrate an AI-first mindset, thinking beyond data models and dashboards to how data can power intelligent systems and decision-making</li>
<li>Understand the importance of well-modeled, well-documented, and semantically clear data for AI and LLM-based use cases</li>
<li>A level of comfort leveraging AI-assisted workflows to improve productivity, code quality, and consistency</li>
<li>Curiosity for emerging capabilities in platforms like Snowflake Cortex and Snowflake Intelligence, and how they can be applied to Enterprise analytics</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5–8+ years of experience in Analytics Engineering, Data Engineering, or similar roles</li>
<li>Strong SQL skills and experience building analytics-ready data models</li>
<li>Mentorship &amp; engineering excellence: raising the technical bar and establishing organization-wide standards for dbt/SQL quality and CI/CD</li>
<li>Hands-on experience with dbt and Snowflake or other ETL, Modeling and database platforms</li>
<li>Solid understanding of data modeling principles, including dimensional modeling and semantic design</li>
<li>Ability to navigate highly ambiguous business challenges, translating vague, complex, or competing goals from executive stakeholders into clear, actionable, and robust data solutions</li>
<li>Experience translating business requirements into clear, maintainable data logic</li>
<li>Familiarity with SaaS metrics and Finance and People data (e.g., ARR, revenue recognition, billing, attrition etc.)</li>
<li>Experience with data quality, testing, and documentation best practices</li>
<li>Exposure to Python, R, or data processing frameworks (e.g., PySpark) is a plus</li>
<li>Experience with BI tools such as Tableau or Looker</li>
<li>Strong communication skills and ability to work across technical and business teams</li>
</ul>
<p>What you can look forward to as an Okta employee!</p>
<ul>
<li>Amazing Benefits</li>
<li>Making Social Impact</li>
<li>Fostering Diversity, Equity, Inclusion and Belonging at Okta</li>
<li>Okta cultivates a dynamic work environment, providing the best tools, technology, and benefits to empower our employees to work productively in a setting that best suits their needs. Each team differs in the flexibility and mobility it needs, so all employees are enabled to be their most creative and successful selves, regardless of where they live.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, Snowflake, SQL, data modeling, dimensional modeling, semantic design, ETL, data quality, testing, documentation, Python, R, PySpark, Tableau, Looker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7818510?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bellevue, Washington; Chicago, Illinois; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c53ecdd3-dc7</externalid>
      <Title>Scale Solution Engineer</Title>
      <Description><![CDATA[<p>As a Scale Solution Engineer at Databricks, you will play a critical role in advising customers during their onboarding process. You will work directly with customers to help them onboard and deploy Databricks in their production environment.</p>
<p>Your impact will be significant, ensuring new customers have an excellent experience by providing technical assistance early in their journey. You will become an expert on the Databricks Platform and guide customers in making the best technical decisions. You will also work directly with multiple customers concurrently to provide technical solutions.</p>
<p>To succeed in this role, you will need:</p>
<ul>
<li>An undergraduate degree or higher in Computer Science, Information Systems, or relevant experience</li>
<li>1+ years of experience in a technical role, preferably in the data or cloud field</li>
<li>Knowledge of at least one public cloud platform (AWS, Azure, or GCP)</li>
<li>Knowledge of a programming language such as Python, Scala, or SQL</li>
<li>Knowledge of end-to-end data analytics workflow</li>
<li>Hands-on professional or academic experience in one or more of the following: Data Engineering technologies (e.g., ETL, DBT, Spark, Airflow), Data Warehousing technologies (e.g., SQL, Stored Procedures, Redshift, Snowflake)</li>
<li>Excellent time management and prioritization skills</li>
<li>Excellent written and verbal communication</li>
</ul>
<p>Bonus: Knowledge of Data Science and Machine Learning (e.g., building and deploying ML models)</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>public cloud platforms, AWS, Azure, GCP, Python, Scala, SQL, Data Engineering technologies, ETL, DBT, Spark, Airflow, Data Warehousing technologies, Stored Procedures, Redshift, Snowflake, Data Science, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organisations worldwide rely on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8408817002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Costa Rica</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9ce3bb01-4a1</externalid>
      <Title>Scale Solutions Engineer</Title>
      <Description><![CDATA[<p>At Databricks, we aim to empower our customers to solve the world&#39;s most challenging data problems using the Data Intelligence platform. As a Scale Solution Engineer, you will be critical in advising customers during their onboarding. You will work directly with customers to help them onboard and deploy Databricks in their production environment and accelerate Databricks features adoption.</p>
<p>The impact you will have:</p>
<ul>
<li>Ensure new customers have an excellent experience by providing technical assistance early in their journey</li>
<li>Become an expert on the Databricks Platform and guide customers in making the best technical decisions</li>
<li>Work directly with multiple customers concurrently to provide technical solutions</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Undergraduate degree or higher in Computer Science, Information Systems, or relevant experience</li>
<li>3+ years of experience in a customer-facing technical role in pre-sales, professional services, consulting, or customer success</li>
<li>Solid understanding of the end-to-end data analytics workflow</li>
<li>Excellent time management and prioritization skills</li>
<li>Knowledge of a programming language - Python, Scala, or SQL</li>
<li>Knowledge of public cloud platforms (AWS, Azure, or GCP) would be a plus</li>
<li>Hands-on professional or academic experience in one or more of the following:
<ul>
<li>Data Engineering technologies (e.g., ETL, DBT, Spark, Airflow)</li>
<li>Data Warehousing technologies (e.g., SQL, Stored Procedures, Redshift, Snowflake)</li>
</ul>
</li>
<li>Excellent written and verbal communication in English and Portuguese</li>
<li>Bonus - Knowledge of Data Science and Machine Learning (e.g., building and deploying ML models)</li>
<li>Databricks certification(s)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks, Data Engineering, Data Warehousing, Python, Scala, SQL, AWS, Azure, GCP, ETL, DBT, Spark, Airflow, Redshift, Snowflake, English, Portuguese, Data Science, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
<Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8391865002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Sao Paulo, Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>89603aee-e7d</externalid>
      <Title>Senior Data Analyst, Product Analytics (Fixed Term 10 Months)</Title>
      <Description><![CDATA[<p>This role is offered for a 10-month, fixed-term duration.</p>
<p>As a Senior Data Analyst on the Product Analytics team, you will be a strategic thought partner with substantial autonomy, shaping how we use data to build products that improve the lives of our members. You will work side-by-side with product managers, engineers, clinicians, and business leaders to uncover insights, influence roadmaps, and measure impact.</p>
<p>The Product Analytics team develops the reports, tools, and insights that drive product decisions across Omada. Using data from app engagement, clinical outcomes, behavioral health indicators, and claims, you will help product and engineering teams identify and solve the most important challenges facing our customers and members. In this role, you will own high-visibility analytics workstreams end-to-end, from defining the problem to communicating recommendations to executives.</p>
<p>Key job responsibilities include:</p>
<ul>
<li>Own the data architecture and data mapping strategy that powers our product reporting, ensuring teams can self-serve reliable insights.</li>
<li>Deliver key performance insights through regular and ad-hoc analyses that guide investment decisions and the evolution of Omada programs.</li>
<li>Lead product experimentation, including test design, sample size determination, A/B testing, and clear communication of results and trade-offs.</li>
<li>Design, build, and maintain intuitive Amplitude and Tableau dashboards (or similar BI tools) that become the source of truth for product teams.</li>
<li>Partner closely with data engineering and other data teams to improve data quality, standardize definitions, and scale reporting across the organization.</li>
<li>Define and refine operational processes, and support system enhancements that make analytics and reporting more efficient and impactful.</li>
<li>Collaborate across analytics, data science, engineering, clinical, and product teams to drive aligned, data-informed decisions.</li>
<li>Explore and integrate AI-driven approaches into product analytics use cases to level up how we generate insights and automate decision support.</li>
</ul>
<p>About you:</p>
<ul>
<li>2+ years of deep expertise in product or mobile analytics, including user behavior analysis; 7+ years of overall experience is ideal.</li>
<li>Expert SQL skills with strong experience in designing and developing new queries and reports.</li>
<li>Experience with A/B testing and statistical measurement.</li>
<li>Ability to analyze various data sets, and develop succinct key metrics.</li>
<li>Strong relational database design/development skills.</li>
<li>Excellent problem-solving, interpersonal, and communication skills.</li>
<li>Track record of working on multiple projects under aggressive deadlines.</li>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or Mathematics.</li>
<li>Skills: Data Analysis and Visualization, Amplitude, Tableau, SQL, ETL, Databases, Data model, and Excel.</li>
</ul>
<p>Your impact:</p>
<ul>
<li>Your work will directly shape how Omada designs, launches, and iterates on member experiences. You will identify new ways to drive engagement and clinical outcomes, support new products and features, and demonstrate the value of existing innovations.</li>
</ul>
<p>You will love this job if you are:</p>
<ul>
<li>A highly motivated, self-driven professional who thrives with autonomy and ownership.</li>
<li>Excited to work closely with product managers, engineers, clinicians, and business stakeholders to tell compelling stories through data.</li>
<li>Energized by solving complex data challenges and building scalable solutions that others rely on every day.</li>
</ul>
<p>Bonus points for:</p>
<ul>
<li>Knowledge of additional coding languages (Python or R).</li>
<li>Advanced experience in data visualization.</li>
<li>Background in healthcare and experience with PHI.</li>
<li>Knowledge of population health metrics.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive salary.</li>
<li>Remote-first work-from-home culture.</li>
<li>Flexible Time Off to help you rest, recharge, and connect with loved ones.</li>
<li>Health, dental, and vision insurance (and above-market employer contributions).</li>
<li>...and more!</li>
</ul>
<p>It takes a village to change healthcare. As we build together toward our mission, we strive to embody the following values in our day-to-day work. We hope these hold meaning for you as well as you consider Omada!</p>
<ul>
<li>Start with Trust. We listen closely and we operate with kindness. We provide respectful and candid feedback to each other.</li>
<li>Seek Context. We ask to understand and we build connections. We do our research up front to move faster down the road.</li>
<li>Act Boldly. We innovate daily to solve problems, improve processes, and find new opportunities for our members and customers.</li>
<li>Deliver Results. We reward impact above output. We set a high bar, we&#39;re not afraid to fail, and we take pride in our work.</li>
<li>Succeed Together. We prioritize Omada&#39;s progress above team or individual. We have fun as we get stuff done, and we celebrate together.</li>
<li>Remember Why We&#39;re Here. We push through the challenges of changing healthcare because we know the destination is worth it.</li>
</ul>
<p>About Omada Health:</p>
<p>Omada Health is a between-visit healthcare provider that addresses lifestyle and behavior change elements for individuals managing chronic conditions. Omada&#39;s multi-condition platform treats diabetes, hypertension, prediabetes, musculoskeletal, and GLP-1 management. With insights from connected devices and AI-supported tools, Omada care teams deliver care that is rooted in evidence and unique to every member, unlocking results at scale. With more than a decade of experience and data, and 29 peer-reviewed publications showcasing clinical and economic proof points, Omada&#39;s approach is designed to improve health outcomes and contain costs. Our customers include health plans, pharmacy benefit managers, health systems, and employers ranging from small businesses to Fortune 500s. At Omada, we aim to inspire and empower people to make lasting health changes on their own terms.</p>
<p>For more information, visit: <a href="https://www.omadahealth.com/">https://www.omadahealth.com/</a></p>
<p>Omada is thrilled to share that we&#39;ve been certified as a Great Place to Work!</p>
<p>We carefully hire the best talent we can find, which means actively seeking diversity of beliefs, backgrounds, education, and ways of thinking. We strive to build an inclusive culture where differences are celebrated and leveraged to inform better design and business decisions. Omada is proud to be an equal opportunity workplace and affirmative action employer. We are committed to equal opportunity regardless of race, color, religion, sex, gender identity, national origin, ancestry, citizenship, age, physical or mental disability, legally protected medical condition, family care status, military or veteran status, marital status, domestic partner status, sexual orientation, or any other basis protected by local, state, or federal laws.</p>
<p>Below is a summary of salary ranges for this role in the following geographies:</p>
<ul>
<li>California, New York</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Analysis and Visualization, Amplitude, Tableau, SQL, ETL, Databases, Data model, Excel</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a between-visit healthcare provider that addresses lifestyle and behavior change elements for individuals managing chronic conditions.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7800367?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>dcb44b1b-ec9</externalid>
      <Title>Senior Data Analyst</Title>
      <Description><![CDATA[<p>At Neighbor, our vision is to bring communities together by solving our neighbors&#39; biggest challenges.</p>
<p>We&#39;re building the largest hyperlocal marketplace the world has seen. Our marketplace is already flourishing in all 50 states and we&#39;re just getting started!</p>
<p>As a Senior Data Analyst, you are an independent driver and architect, entrusted with solving Neighbor&#39;s most ambitious, complex, and ambiguous problems. You will leverage advanced data analytics to not only execute the data strategy, but also anticipate the challenges that lie ahead. You will bridge the gap between executive vision and technical implementation, pioneering robust data models and pipelines while steering executive-level data strategy and fundamentally shaping the company&#39;s data-driven business decisions with your insights.</p>
<p><strong>Primary Responsibilities</strong></p>
<ul>
<li>Help build a world-class Data &amp; Analytics team focused on data integrity, business intelligence, data-driven decisions, testing, accountability and research</li>
<li>Lead the design and implementation of our data modeling layers (Data Lakes/Warehouses), ensuring a &#39;single source of truth&#39; for a complex, two-sided marketplace</li>
<li>Beyond SQL, you will build and maintain robust ETL/ELT pipelines and deploy predictive models to forecast and automate insights</li>
<li>Mentor junior team members and empower non-technical stakeholders to make autonomous, data-informed decisions through world-class BI tooling</li>
<li>Move beyond &#39;what happened&#39; to &#39;what will happen.&#39; Develop statistical frameworks to test hypotheses and measure the impact of new product features</li>
<li>Act as the primary data partner for Product, Marketing, Sales, Engineering, Finance, and Customer Success leadership</li>
<li>Become an expert on all aspects of Neighbor&#39;s product, user, marketing and other data</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree in a quantitative and/or technical field (Math, Physics, Statistics, Economics, Computer Science, Engineering, etc.) OR 7 years of working experience in data analytics</li>
<li>5+ years of prior experience as a Data Analyst</li>
<li>3+ years of experience solving complex, ambiguous problems independently and presenting results to stakeholders</li>
<li>Experience with ETL such as data manipulation, organization and cleaning</li>
<li>Experience modeling data lakes or data warehouses</li>
<li>Experience with predictive modeling</li>
<li>Advanced proficiency in reporting and visualization software</li>
<li>Experience using mathematical, scientific and statistical techniques to analyze data</li>
<li>Proven track record of using quantitative analysis to solve problems, and drive key business decisions</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>About Neighbor: Neighbor is the largest and most comprehensive marketplace for self storage and parking, with listings in almost every U.S. city. From storage facilities to neighborhood garages, driveways, and RV spots, Neighbor brings every option together in one simple search.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>ETL, data modeling, predictive modeling, reporting and visualization software, mathematical and statistical techniques, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Neighbor</Employername>
      <Employerlogo>https://logos.yubhub.co/neighbor.com.png</Employerlogo>
      <Employerdescription>Neighbor is a marketplace for self storage and parking, operating in almost every U.S. city.</Employerdescription>
      <Employerwebsite>https://www.neighbor.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/neighbor/034252c8-d93c-40a0-95dc-2830273acba0?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>U.S.</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>9a39254a-08b</externalid>
      <Title>Finance Manager</Title>
      <Description><![CDATA[<p>We are seeking a Finance Manager to join our Statutory Financial Reporting and Compliance team in London. The role will be responsible for aspects of the controllership and accounting function in support of our EMEA operations. You will work closely with senior management to lead and drive various meetings on key financial statement areas and own the monthly, quarterly, and annual closes for the EMEA operations.</p>
<p>Your responsibilities will include driving standardization and continuous improvement of processes and controls to obtain the highest integrity of financial reporting, including partnering with stakeholders for a successful end-to-end approach to reporting. You will also deliver functional projects on a cross-country level, such as implementing best accounting/controllership practice to enhance and deepen the control environment.</p>
<p>As a Finance Manager, you will ensure that our EMEA Accounting activities are aligned across the Worldwide Accounting Organization and influence and articulate accounting/finance terminology to the non-finance community. You will also support internal and external audit processes.</p>
<p>To succeed in this role, you will need a Bachelor&#39;s degree in engineering, statistics, or business, or a Bachelor&#39;s degree and experience in a quantitative role. You will also need experience in tax, finance, or a related analytical field, as well as experience in multiple finance and accounting roles.</p>
<p>Preferred qualifications include knowledge of SQL/ETL, experience in identifying incomplete or inaccurate data, and experience in solving complex business challenges by delivering accurate and timely financial models, analysis, and recommendations.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, ETL, accounting, controllership, financial reporting, process improvement, leadership, data mining, reporting tools, financial systems, forecasting, budgeting, variance analysis</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Amazon</Employername>
      <Employerlogo>https://logos.yubhub.co/amazon.jobs.png</Employerlogo>
      <Employerdescription>Amazon is a multinational technology company that operates a retail e-commerce platform and cloud computing services.</Employerdescription>
      <Employerwebsite>https://www.amazon.jobs</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://www.amazon.jobs/en/jobs/10389769/finance-manager-statutory-financial-reporting-and-compliance-accounting?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-15</Postedate>
    </job>
    <job>
      <externalid>6b0c5213-d69</externalid>
      <Title>Technical Account Manager, German OR Polish Speaking</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Technical Account Manager to join our team in Dublin. As a Technical Account Manager, you will work cross-functionally to support Stripe&#39;s largest and most complex users. You will provide a Gold Standard Experience to your assigned accounts&#39; key stakeholders, work with the wider Operations team to provide current state, resources and knowledge to enable Gold Standard Experience across teams interacting directly with the user, and foster long-term user relationships that grow loyalty to Stripe and Stripe products.</p>
<p>Responsibilities:</p>
<ul>
<li>Provide a Gold Standard Experience to your assigned accounts&#39; key stakeholders</li>
<li>Work with the wider Operations team to provide current state, resources and knowledge to enable Gold Standard Experience across teams interacting directly with the user via support channels, external documentation, or product/feature feedback or development</li>
<li>Foster long-term user relationships that grow loyalty to Stripe and Stripe products</li>
<li>Work cross-functionally both internally and within your users&#39; organizations to provide and implement operational solutions on subjects including, but not limited to, fraud/disputes, declines, product adoption, and global expansion</li>
<li>Work closely with Customer Success and other user-facing teams as part of a larger effort to support users on Stripe</li>
<li>Lead user-facing meetings both in person and through video chat</li>
<li>Collaborate on the continued design of this support offering</li>
<li>Create user-facing content for long-term solutions</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Minimum 3 years of experience in enterprise-level client-facing work</li>
<li>Strong product sense and energized by the challenge of solving difficult user-related problems</li>
<li>Strong written and verbal communication skills in English and (German or Polish) to support regional-based customers</li>
<li>Ability to lead complex integration conversations in a highly consultative and proactive manner</li>
<li>Familiarity with APIs and able to explain API concepts to Stripe&#39;s largest and most technical customers</li>
<li>Familiarity with SQL and comfort building basic queries and modifying more complex ones</li>
<li>Strong technical troubleshooting skills and comfort interfacing with technical teams</li>
<li>Professional, confident, and collaborative personality; an adept client relationship manager, capable of engaging in business-level and technical conversations at multiple levels of the organization</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Operations-savvy mindset, with an ability to identify and eliminate process friction while continuing to build scalable processes</li>
<li>Experience with small to medium-scale project management</li>
<li>Strong organizational skills and self-starting mindset</li>
<li>Ideal experience with tools like Postman, Xcode, Python, Webhooks, ETL</li>
<li>Ideal experience in the payments industry</li>
<li>Polish or Ukrainian language expertise (native or intermediate) is a plus</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
<Skills>APIs, SQL, Technical Troubleshooting, Client Relationship Management, Communication, Problem-Solving, Postman, Xcode, Python, Webhooks, ETL, Project Management, Organizational Skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7594376?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Dublin</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>42c9cfa4-8e3</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p><strong>Data Platform Engineer</strong></p>
<p>Join AVL and make a direct impact on shaping the future of Data, AI, and Mobility.</p>
<p><strong>Your Responsibilities:</strong></p>
<ul>
<li>Review and stabilise existing platform implementations (Databricks, Foundry – pipelines, Ontology schemas, Workshop applications, Functions, notebooks).</li>
<li>Identify performance bottlenecks, technical debt, and governance gaps across data pipelines and application layers.</li>
<li>Lead Ontology governance and design reviews, acting as a gatekeeper for all schema changes (Object Types, Links, Properties, Actions).</li>
<li>Define and document target data architectures (ingestion, transformation, and consumption layers).</li>
<li>Establish coding standards, naming conventions, repository structures, and Function versioning policies.</li>
<li>Enforce code reviews and technical validation before production deployment through Foundry Branching and Proposal workflows.</li>
<li>Define and implement a structured testing strategy (unit tests for Functions, integration tests, data quality checks, pipeline expectations).</li>
<li>Design and improve CI/CD pipelines and Dev/Test/Prod promotion processes using Foundry Marketplace/DevOps.</li>
<li>Automate deployments, rollbacks, and environment configurations.</li>
<li>Create and maintain architecture documentation (ADRs, data lineage diagrams, Ontology schemas, data flow diagrams).</li>
<li>Design reusable Workshop component libraries, custom widgets, and Slate application patterns.</li>
<li>Design and validate new platform solutions aligned with strategy, security, and governance requirements.</li>
<li>Mentor the development team on architectural thinking and platform best practices (40% hands-on coding, 60% architecture/leadership).</li>
</ul>
<p><strong>Your Profile:</strong></p>
<ul>
<li>Master’s degree in Computer Science, Data Engineering, or a related field.</li>
<li>5+ years of experience in data engineering or platform architecture roles.</li>
<li>Strong expertise in modern data platforms (Databricks, Snowflake, AWS Glue, Azure Synapse, or similar). Foundry experience is strongly preferred but not required.</li>
<li>Advanced skills in Python (PySpark), SQL (Spark SQL), and TypeScript for backend logic and application development.</li>
<li>Experience with distributed data processing (Spark architecture, partitioning strategies, performance optimisation).</li>
<li>Strong understanding of relational databases (PostgreSQL, Oracle, or similar).</li>
<li>Experience with CI/CD workflows, Git branching strategies, and automated testing in data environments.</li>
<li>Solid experience in end-to-end ETL and data transformation processes.</li>
<li>Proven experience in performance optimisation and scalable architecture design.</li>
<li>Experience in defining development standards, interface contracts, and engineering best practices.</li>
<li>Hands-on coding mindset: must write production code daily, not only review or document.</li>
<li>Structured, analytical, and documentation-oriented approach.</li>
<li>Strong communication and technical leadership skills, with very good proficiency in English and French.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>A role with true technical ownership: architecture, scaling, and governance decisions that directly impact production AI solutions.</li>
<li>Complex projects that go beyond “just pipelines” – covering big data processing and large-scale ML/DL deployment.</li>
<li>Opportunities to deepen your expertise in Databricks, cloud-native ML, and MLOps.</li>
<li>A team where your input and technical decisions truly matter.</li>
<li>A competitive package and benefits.</li>
</ul>
<p><strong>How to Apply:</strong></p>
<p>If you have these qualifications and are looking for a new challenge, we encourage you to apply to discuss it further!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks, Foundry, Python, SQL, TypeScript, Spark, PostgreSQL, CI/CD, Git, ETL, performance optimisation, scalable architecture design, cloud-native ML, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>AVL</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.avl.com.png</Employerlogo>
      <Employerdescription>AVL is a leading mobility technology company that provides concepts, solutions, and methodologies in fields like vehicle development and integration, e-mobility, automated and connected mobility, and software.</Employerdescription>
      <Employerwebsite>https://jobs.avl.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.avl.com/job/Sala-Al-Jadida-Data-Platform-Engineer/1365823133/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Sala Al Jadida, MA</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
  </jobs>
</source>