{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/orchestration-tools"},"x-facet":{"type":"skill","slug":"orchestration-tools","display":"Orchestration Tools","count":26},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_10836c16-e0c"},"title":"Senior Staff Operations Engineer, AIOps","description":"<p>Job Title: Senior Staff Operations Engineer, AIOps</p>\n<p>Join the BizTech team at Airbnb and contribute to fostering culture and connection at the company by providing reliable corporate tools, innovative products, and technical support for all teams.</p>\n<p>As a Senior Staff Engineer in Operations, you will lead and mentor a high-performing team to scale our AI-enabled operations model and deliver AIOps solutions that streamline operational workstreams and help BizTech teams focus on their core work with confidence.</p>\n<p>Your scope includes leading projects across multiple products and platforms, delivering world-class outcomes that create customer and community value while balancing near- and long-term needs.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Lead technical strategy and discussions, partnering with Operations peers and cross-functional BizTech teams to build AIOps and automation solutions.</li>\n</ul>\n<ul>\n<li>Stay on top of tasks, engagements, and team interactions,active collaboration is key to success.</li>\n</ul>\n<ul>\n<li>Work in sprints, delivering project work across coding, testing, design, documentation, and operational readiness reviews.</li>\n</ul>\n<ul>\n<li>Dedicate part of each day to core Operations work, triaging tickets, spotting patterns, and driving scalable fixes that improve efficiency.</li>\n</ul>\n<ul>\n<li>Participate in an on-call rotation, leading high-severity incident response as both incident commander and operations engineer.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>15+ years of experience across AIOps, data catalog architecture, product development, and/or Technical Operations infrastructure.</li>\n</ul>\n<ul>\n<li>Strong SDLC experience, including infrastructure as code, configuration management, distributed version control, and CI/CD.</li>\n</ul>\n<ul>\n<li>Deep expertise in complex enterprise infrastructure, especially cloud (AWS and/or Google), with a focus on AI/automation, data catalog architecture, workflows, and correlation.</li>\n</ul>\n<ul>\n<li>Solid understanding of corporate infrastructure and applications to translate into AIOps requirements and integrations.</li>\n</ul>\n<ul>\n<li>Proven ability to lead cross-team, cross-org delivery of large-scale, technically complex, ambiguous initiatives that anticipate business needs.</li>\n</ul>\n<ul>\n<li>Proficient in Python or Go.</li>\n</ul>\n<ul>\n<li>Experience building API integrations and event-driven architectures (e.g., AWS Lambda/SQS).</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with cloud-based infrastructure and services.</li>\n</ul>\n<ul>\n<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>\n</ul>\n<ul>\n<li>Knowledge of DevOps practices and tools (e.g., Jenkins, GitLab).</li>\n</ul>\n<ul>\n<li>Experience with agile development methodologies and frameworks (e.g., Scrum, Kanban).</li>\n</ul>\n<ul>\n<li>Strong communication and interpersonal skills.</li>\n</ul>\n<ul>\n<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>\n</ul>\n<p>Salary: $212,000-$265,000 USD per year.</p>\n<p>Benefits: Bonus, equity, benefits, and Employee Travel Credits.</p>\n<p>Workplace Type: Remote eligible.</p>\n<p>Experience Level: Senior.</p>\n<p>Employment Type: Full-time.</p>\n<p>Category: Engineering.</p>\n<p>Industry: Technology.</p>\n<p>Required Skills: AIOps, data catalog architecture, product development, Technical Operations infrastructure, SDLC, infrastructure as code, configuration management, distributed version control, CI/CD, cloud (AWS and/or Google), AI/automation, data catalog architecture, workflows, and correlation.</p>\n<p>Preferred Skills: Cloud-based infrastructure and services, containerization and orchestration tools (e.g., Docker, Kubernetes), DevOps practices and tools (e.g., Jenkins, GitLab), agile development methodologies and frameworks (e.g., Scrum, Kanban), strong communication and interpersonal skills, ability to work in a fast-paced environment and adapt to changing priorities.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_10836c16-e0c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7644921","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$212,000-$265,000 USD per year","x-skills-required":["AIOps","data catalog architecture","product development","Technical Operations infrastructure","SDLC","infrastructure as code","configuration management","distributed version control","CI/CD","cloud (AWS and/or Google)","AI/automation","workflows","correlation"],"x-skills-preferred":["cloud-based infrastructure and services","containerization and orchestration tools (e.g., Docker, Kubernetes)","DevOps practices and tools (e.g., Jenkins, GitLab)","agile development methodologies and frameworks (e.g., Scrum, Kanban)","strong communication and interpersonal skills","ability to work in a fast-paced environment and adapt to changing priorities"],"datePosted":"2026-04-18T15:56:46.488Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"engineering","industry":"technology","skills":"AIOps, data catalog architecture, product development, Technical Operations infrastructure, SDLC, infrastructure as code, configuration management, distributed version control, CI/CD, cloud (AWS and/or Google), AI/automation, workflows, correlation, cloud-based infrastructure and services, containerization and orchestration tools (e.g., Docker, Kubernetes), DevOps practices and tools (e.g., Jenkins, GitLab), agile development methodologies and frameworks (e.g., Scrum, Kanban), strong communication and interpersonal skills, ability to work in a fast-paced environment and adapt to changing priorities","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":212000,"maxValue":265000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_53ee0ef3-c62"},"title":"Staff Data Engineer, Analytics Data Engineering","description":"<p>We are looking for a Staff Data Engineer to join our Analytics Data Engineering (ADE) team within Data Science &amp; AI Platform. As a Staff Data Engineer, you will be responsible for solving cross-cutting data challenges that span multiple lines of business while driving standardization in how we build, deploy, and govern analytics pipelines across Dropbox.</p>\n<p>This is not a maintenance role. We are modernizing our analytics platform, upgrading orchestration infrastructure, building shared and reusable data models with conformed dimensions, establishing a certified metrics framework, and laying the foundation for AI-native data development. You will partner closely with Data Science, Data Infrastructure, Product Engineering, and Business Intelligence teams to make this happen.</p>\n<p>You will play a crucial role in establishing analytics engineering standards, designing scalable data models, and driving cross-functional alignment on data governance. You will get substantial exposure to senior leadership, shape the technical direction of analytics infrastructure at Dropbox, and directly influence how data powers product and business decisions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the design and implementation of shared, reusable data models, defining shared fact tables, conformed dimensions, and a semantic/metrics layer that serves as the single source of truth across analytics functions</li>\n</ul>\n<ul>\n<li>Drive standardization of data engineering practices across ADE and functional analytics teams, including pipeline patterns, CI/CD workflows, naming conventions, and data modeling standards</li>\n</ul>\n<ul>\n<li>Partner with Data Infrastructure to modernize orchestration, improve pipeline decomposition, and establish secure dev/test environments with production data access</li>\n</ul>\n<ul>\n<li>Architect and implement a shift-left data governance strategy, working with upstream data producers to establish data contracts, SLOs, and code-enforced quality gates that catch issues before production</li>\n</ul>\n<ul>\n<li>Collaborate with Data Science leads and Product Management to translate metric definitions into reliable, certified data pipelines that power executive dashboards, WBR reporting, and growth measurement</li>\n</ul>\n<ul>\n<li>Reduce operational burden by improving pipeline granularity, observability, and failure recovery, establishing runbooks and alerting standards that make on-call sustainable</li>\n</ul>\n<ul>\n<li>Evaluate and integrate AI-native tooling into the data development lifecycle, enabling conversational data exploration with guardrails and AI-assisted pipeline development</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>BS degree in Computer Science or related technical field, or equivalent technical experience</li>\n</ul>\n<ul>\n<li>12+ years of experience in data engineering or analytics engineering with increasing scope and technical leadership</li>\n</ul>\n<ul>\n<li>12+ years of SQL experience, including complex analytical queries, window functions, and performance optimization at scale (Spark SQL)</li>\n</ul>\n<ul>\n<li>8+ years of Python development experience, including building and maintaining production data pipelines</li>\n</ul>\n<ul>\n<li>Deep expertise in dimensional data modeling, schema design, and scalable data architecture, with hands-on experience building shared data models across multiple business domains</li>\n</ul>\n<ul>\n<li>Strong experience with orchestration tools (Airflow strongly preferred) and dbt, including pipeline design, scheduling strategies, and failure recovery patterns</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with Databricks (Unity Catalog, Delta Lake) and modern lakehouse architectures</li>\n</ul>\n<ul>\n<li>Experience leading orchestration or platform modernization efforts at scale</li>\n</ul>\n<ul>\n<li>Familiarity with data governance and observability tools such as Atlan, Monte Carlo, Great Expectations, or similar</li>\n</ul>\n<ul>\n<li>Experience building or contributing to a metrics/semantic layer (dbt MetricFlow, Databricks Metric Views, or equivalent)</li>\n</ul>\n<ul>\n<li>Track record of establishing data engineering standards and best practices in a federated analytics organization</li>\n</ul>\n<p>Compensation:</p>\n<p>US Zone 2 $198,900-$269,100 USD</p>\n<p>US Zone 3 $176,800-$239,200 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_53ee0ef3-c62","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dropbox","sameAs":"https://www.dropbox.com/","logo":"https://logos.yubhub.co/dropbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dropbox/jobs/7595183","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$198,900-$269,100 USD","x-skills-required":["SQL","Python","Dimensional data modeling","Schema design","Scalable data architecture","Orchestration tools","dbt"],"x-skills-preferred":["Databricks","Modern lakehouse architectures","Data governance and observability tools","Metrics/semantic layer"],"datePosted":"2026-04-18T15:56:35.190Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US: Select locations"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Dimensional data modeling, Schema design, Scalable data architecture, Orchestration tools, dbt, Databricks, Modern lakehouse architectures, Data governance and observability tools, Metrics/semantic layer","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198900,"maxValue":269100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_748970f4-3b6"},"title":"Senior Data Scientist AI Tooling","description":"<p><strong>Job Description</strong></p>\n<p>Intercom is the AI Customer Service company on a mission to help businesses provide incredible customer experiences.</p>\n<p>Our AI agent Fin, the most advanced customer service AI agent on the market, lets businesses deliver always-on, impeccable customer service and ultimately transform their customer experiences for the better. Fin can also be combined with our Helpdesk to become a complete solution called the Intercom Customer Service Suite, which provides AI enhanced support for the more complex or high touch queries that require a human agent.</p>\n<p><strong>What&#39;s the Opportunity?</strong></p>\n<p>The Research, Analytics &amp; Data Science (RAD) team turns insight into action. We uncover customer, product, and business insights and translate them into tools and decision systems embedded directly into GTM workflows.</p>\n<p>AI has unlocked an entirely new generation of internal tools for our GTM teams. We’re evolving from static dashboards to LLM and agent-powered workflows that do the work: auto-researching accounts, summarizing prior interactions, drafting personalized outreach, flagging renewal risk, and assembling decks and docs - enabling Sales and Success to focus on high-value conversations.</p>\n<p><strong>What Will I Be Doing?</strong></p>\n<ul>\n<li>Design, evaluate, and ship AI-powered internal tools for GTM use cases - including account research &amp; summaries, next-best-action recommendations, renewal propensity, pipeline risk detection, QBR/autobrief generation, and post-call summarization &amp; follow-ups.</li>\n<li>Work end-to-end: Own the full lifecycle, from problem definition and data modeling to building production-ready tools, including writing Python backends and React frontends.</li>\n<li>Prototype fast, ship to learn: Rapidly build with users, then productionize quickly to iterate and deliver impact.</li>\n<li>Instrument for adoption and outcomes: Define success through real usage and measurable business impact (e.g., improved win rate, conversion, expansion).</li>\n<li>Evangelize and enable: Document playbooks, run enablement sessions, and help leaders operationalize new tooling across teams.</li>\n</ul>\n<p><strong>What Skills Do I Need?</strong></p>\n<ul>\n<li>Proven track record of applied data science with measurable GTM impact - you’ve shipped models or tools that moved metrics like conversion, cycle time, or retention.</li>\n<li>LLM/ML application experience - familiarity with RAG, prompt and tool design, vector search, evals and have leveraged AI for development.</li>\n<li>Excellent SQL skills and fluency in Python or R, with experience applying analytical and statistical methods to business problems.</li>\n<li>Experience with orchestration tools (e.g., DBT, Airflow) for deploying reliable data workflows.</li>\n<li>Strong communication and empathy - ability to translate complex data concepts for non-technical stakeholders.</li>\n<li>Collaborative product mindset - comfort working closely with Sales and Success teams to turn ambiguity into clear deliverables.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>\n<ul>\n<li>Competitive salary and equity in a fast-growing start-up</li>\n<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>\n<li>Regular compensation reviews - we reward great work</li>\n<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>\n<li>Open vacation policy and flexible holidays so you can take time off when you need it</li>\n<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>\n<li>MacBooks are our standard, but we’re happy to get you whatever equipment helps you get your job done</li>\n</ul>\n<p><strong>Policies</strong></p>\n<p>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate easier and create a great culture while still providing flexibility to work from home. We expect employees to be in the office at least three days per week.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_748970f4-3b6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Intercom","sameAs":"https://www.intercom.com/","logo":"https://logos.yubhub.co/intercom.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/intercom/jobs/7606649","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["applied data science","LLM/ML application experience","Python","R","SQL","orchestration tools","strong communication and empathy","collaborative product mindset"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:06.215Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, England"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"applied data science, LLM/ML application experience, Python, R, SQL, orchestration tools, strong communication and empathy, collaborative product mindset"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d29bce76-4c3"},"title":"Senior Data Scientist AI Tooling","description":"<p>Job Description:</p>\n<p>We&#39;re looking for a Senior Data Scientist to join our Research, Analytics &amp; Data Science (RAD) team. The RAD team turns insight into action by uncovering customer, product, and business insights and translating them into tools and decision systems embedded directly into GTM workflows.</p>\n<p>AI has unlocked an entirely new generation of internal tools for our GTM teams. We&#39;re evolving from static dashboards to LLM and agent-powered workflows that do the work: auto-researching accounts, summarizing prior interactions, drafting personalized outreach, flagging renewal risk, and assembling decks and docs - enabling Sales and Success to focus on high-value conversations.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, evaluate, and ship AI-powered internal tools for GTM use cases - including account research &amp; summaries, next-best-action recommendations, renewal propensity, pipeline risk detection, QBR/autobrief generation, and post-call summarization &amp; follow-ups.</li>\n</ul>\n<ul>\n<li>Work end-to-end: Own the full lifecycle, from problem definition and data modeling to building production-ready tools, including writing Python backends and React frontends.</li>\n</ul>\n<ul>\n<li>Prototype fast, ship to learn: Rapidly build with users, then productionize quickly to iterate and deliver impact.</li>\n</ul>\n<ul>\n<li>Instrument for adoption and outcomes: Define success through real usage and measurable business impact (e.g., improved win rate, conversion, expansion).</li>\n</ul>\n<ul>\n<li>Evangelize and enable: Document playbooks, run enablement sessions, and help leaders operationalize new tooling across teams.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Proven track record of applied data science with measurable GTM impact - you&#39;ve shipped models or tools that moved metrics like conversion, cycle time, or retention.</li>\n</ul>\n<ul>\n<li>LLM/ML application experience - familiarity with RAG, prompt and tool design, vector search, evals and have leveraged AI for development.</li>\n</ul>\n<ul>\n<li>Excellent SQL skills and fluency in Python or R, with experience applying analytical and statistical methods to business problems.</li>\n</ul>\n<ul>\n<li>Experience with orchestration tools (e.g., DBT, Airflow) for deploying reliable data workflows.</li>\n</ul>\n<ul>\n<li>Strong communication and empathy - ability to translate complex data concepts for non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Collaborative product mindset - comfort working closely with Sales and Success teams to turn ambiguity into clear deliverables.</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive salary and equity in a fast-growing start-up</li>\n</ul>\n<ul>\n<li>Unlimited access to Claude Code and best-in-class AI tools; experimentation &amp; building is encouraged &amp; celebrated.</li>\n</ul>\n<ul>\n<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>\n</ul>\n<ul>\n<li>Regular compensation reviews - we reward great work</li>\n</ul>\n<ul>\n<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>\n</ul>\n<ul>\n<li>Open vacation policy and flexible holidays so you can take time off when you need it</li>\n</ul>\n<ul>\n<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>\n</ul>\n<ul>\n<li>MacBooks are our standard, but we&#39;re happy to get you whatever equipment helps you get your job done</li>\n</ul>\n<p>Policies:</p>\n<ul>\n<li>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate easier and create a great culture while still providing flexibility to work from home. We expect employees to be in the office at least three days per week.</li>\n</ul>\n<ul>\n<li>We have a radically open and accepting culture at Intercom. We avoid spending time on divisive subjects to foster a safe and cohesive work environment for everyone. As an organization, our policy is to not advocate on behalf of the company or our employees on any social or political topics out of our internal or external communications. We respect personal opinion and expression on these topics on personal social platforms on personal time, and do not challenge or confront anyone for their views on non-work related topics. Our goal is to focus on doing incredible work to achieve our goals and unite the company through our core values.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d29bce76-4c3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Intercom","sameAs":"https://www.intercom.com/","logo":"https://logos.yubhub.co/intercom.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/intercom/jobs/7606638","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["applied data science","LLM/ML application experience","SQL skills","Python or R","orchestration tools","strong communication and empathy","collaborative product mindset"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:37.527Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"applied data science, LLM/ML application experience, SQL skills, Python or R, orchestration tools, strong communication and empathy, collaborative product mindset"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_98161ddd-28c"},"title":"Data Analyst III","description":"<p>Why join us</p>\n<p>Brex is a finance platform that enables companies to spend smarter and move faster in over 200 markets. It combines global corporate cards and banking with intuitive spend management, bill pay, and travel software.</p>\n<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>\n<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry.</p>\n<p>We&#39;re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream.</p>\n<p>We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>\n<p>Data at Brex</p>\n<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations.</p>\n<p>Our Data Scientists, Analysts, and Engineers work together to make data,and insights derived from data,a core asset across the company.</p>\n<p>What you&#39;ll do</p>\n<p>As a senior Data Analyst (DA III), you will own the end-to-end analytics lifecycle for one or more business areas at Brex.</p>\n<p>You&#39;ll go beyond building dashboards,you&#39;ll frame the right questions, design rigorous analyses, apply statistical methods, and translate your findings into clear recommendations for leadership.</p>\n<p>You will also serve as a technical leader on the Data Analytics team, mentoring more junior analysts and helping define the standards and best practices that elevate the team&#39;s work.</p>\n<p>This role sits at the intersection of analytics, analytics engineering, and business strategy.</p>\n<p>You&#39;ll work in a modern data stack environment and partner closely with Data Scientists, Data Engineers, and senior leaders across the organization.</p>\n<p>Where you&#39;ll work</p>\n<p>This role will be based in our San Francisco office.</p>\n<p>We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home.</p>\n<p>We currently require a minimum of three coordinated days in the office per week, Monday, Wednesday and Thursday.</p>\n<p>As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p>Responsibilities</p>\n<ul>\n<li>Own the analytics lifecycle for assigned business areas: from problem framing and data sourcing through analysis, insight generation, and stakeholder presentation.</li>\n</ul>\n<ul>\n<li>Build and maintain dashboards and self-service reporting tools that enable business teams to independently track performance, identify risks, and make data-driven decisions.</li>\n</ul>\n<ul>\n<li>Write production-quality SQL and Python code to extract, transform, and analyze data at scale.</li>\n</ul>\n<ul>\n<li>Collaborate with Data Engineers and Data Scientists to develop and maintain analytical data models, improve data pipelines, and ensure data quality across the organization.</li>\n</ul>\n<ul>\n<li>Partner with leadership across Sales, Operations, Product, Finance, and other departments to identify high-impact analytical opportunities and deliver actionable recommendations.</li>\n</ul>\n<ul>\n<li>Mentor other data analysts and contribute to the development of team standards, documentation, code review practices, and analytical frameworks.</li>\n</ul>\n<ul>\n<li>Proactively identify gaps in data infrastructure, propose improvements, and contribute to the evolution of the team’s tooling and processes.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>5+ years of experience in data analytics, business intelligence, or a related quantitative role.</li>\n</ul>\n<ul>\n<li>3+ years of experience partnering directly with Sales, Operations, Product, or equivalent business teams as an embedded analytics partner.</li>\n</ul>\n<ul>\n<li>Advanced SQL proficiency, including CTEs, window functions, performance optimization, and working across complex data models.</li>\n</ul>\n<ul>\n<li>Proficiency in Python for data analysis, automation, and modeling (Pandas, NumPy, scikit-learn, or similar).</li>\n</ul>\n<ul>\n<li>Experience with cloud data warehouses, particularly Snowflake (BigQuery and Databricks also valued).</li>\n</ul>\n<ul>\n<li>Hands-on experience with BI and data visualization tools (Looker, Tableau, Hex, or similar).</li>\n</ul>\n<ul>\n<li>Strong stakeholder management skills,proven ability to present complex technical findings to non-technical audiences.</li>\n</ul>\n<ul>\n<li>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automated reporting, and build self-service data tools.</li>\n</ul>\n<p>Bonus points</p>\n<ul>\n<li>Demonstrated experience applying statistical methods to business problems (e.g., regression, classification, A/B testing).</li>\n</ul>\n<ul>\n<li>Experience with dbt for data modeling and transformation.</li>\n</ul>\n<ul>\n<li>Experience building and maintaining data pipelines using orchestration tools such as Airflow.</li>\n</ul>\n<ul>\n<li>Experience working with APIs for data ingestion and integration.</li>\n</ul>\n<ul>\n<li>Familiarity with version control systems (Git).</li>\n</ul>\n<ul>\n<li>Experience in fintech, financial services, or payments.</li>\n</ul>\n<ul>\n<li>Track record of leading cross-functional analytics projects from scoping through delivery.</li>\n</ul>\n<p>Compensation</p>\n<p>The expected salary range for this role is $114,192 - $142,740.</p>\n<p>However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>\n<p>Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_98161ddd-28c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8463699002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$114,192 - $142,740","x-skills-required":["Advanced SQL","Python","Cloud data warehouses","BI and data visualization tools","Stakeholder management","Generative AI and LLM-based tools"],"x-skills-preferred":["Statistical methods","dbt for data modeling and transformation","Orchestration tools","APIs for data ingestion and integration","Version control systems","Fintech, financial services, or payments"],"datePosted":"2026-04-18T15:51:38.936Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Advanced SQL, Python, Cloud data warehouses, BI and data visualization tools, Stakeholder management, Generative AI and LLM-based tools, Statistical methods, dbt for data modeling and transformation, Orchestration tools, APIs for data ingestion and integration, Version control systems, Fintech, financial services, or payments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114192,"maxValue":142740,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_015afe59-9fd"},"title":"Data Analyst II","description":"<p>Why join us</p>\n<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>\n<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>\n<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry.</p>\n<p>We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream.</p>\n<p>We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>\n<p>Data at Brex</p>\n<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations.</p>\n<p>Our Data Scientists, Analysts, and Engineers work together to make data,and insights derived from data,a core asset across the company.</p>\n<p>What you’ll do</p>\n<p>As a Data Analyst II (DA), you will play a central role in enhancing the operational tracking and reporting capabilities of different business teams across Brex.</p>\n<p>You will work closely with Data Scientists, Data Engineers, and partner teams to drive meaningful insights for the business through visualizations, self-service tools, and ad-hoc analyses.</p>\n<p>This is a high-impact role in a fast-paced fintech environment where your work will directly influence strategic decisions.</p>\n<p>Where you’ll work</p>\n<p>This role will be based in our New York office.</p>\n<p>We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home.</p>\n<p>We currently require a minimum of three coordinated days in the office per week, Monday, Wednesday and Thursday.</p>\n<p>As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p>Responsibilities</p>\n<p>Apply data visualization and storytelling skills in creating business intelligence solutions (such as Looker and/or Hex dashboards) that enable actionable insights.</p>\n<p>Perform ad-hoc analyses and deep dives to investigate business questions, surface trends, and provide data-driven recommendations.</p>\n<p>Develop self-service data tools and processes that empower business stakeholders to independently monitor the performance and health of their respective areas.</p>\n<p>Collaborate closely with Data Scientists and Data Engineers to identify data sources, enable data pipelines, and support the development of analytical data models that operationalize reports and dashboards.</p>\n<p>Implement and maintain rigorous data quality checks to ensure the integrity and robustness of datasets used across dashboards, reports, and analyses.</p>\n<p>Partner with various departments,including Sales, Operations, Product, and Finance,to understand their data needs and deliver tailored analyses and reporting that support strategic planning.</p>\n<p>Contribute to the automation of recurring analyses and reporting workflows using Python.</p>\n<p>Requirements</p>\n<p>3+ years of experience in data analytics or a related role in a professional setting.</p>\n<p>2+ years of experience working directly with Sales, Operations, Product, or equivalent business teams.</p>\n<p>Fluency in SQL to manipulate data and perform complex analyses (CTEs, window functions, joins across large datasets).</p>\n<p>Experience with Python for data analysis, automation, or scripting.</p>\n<p>Experience with business intelligence and data visualization tools (Looker, Hex, Tableau, or similar).</p>\n<p>Strong quantitative and analytical skills with a demonstrated ability to translate data into business insights.</p>\n<p>Strong communication skills and the ability to work effectively with stakeholders across different functions and levels of technical fluency.</p>\n<p>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automated reporting, and build self-service data tools.</p>\n<p>Bonus points</p>\n<p>Familiarity with cloud data platforms (e.g., Snowflake, BigQuery, Databricks).</p>\n<p>Familiarity with dbt for data modeling and transformation.</p>\n<p>Exposure to data pipeline orchestration tools (e.g., Airflow).</p>\n<p>Experience in fintech, financial services, or payments.</p>\n<p>Comfort operating in a fast-paced, high-growth environment with evolving priorities.</p>\n<p>Compensation</p>\n<p>The expected salary range for this role is $93,600 - $117,000.</p>\n<p>However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>\n<p>Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_015afe59-9fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8463702002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$93,600 - $117,000","x-skills-required":["SQL","Python","Business Intelligence","Data Visualization","Generative AI","LLM-based tools"],"x-skills-preferred":["Cloud data platforms","dbt","Data pipeline orchestration tools","Fintech","Financial services","Payments"],"datePosted":"2026-04-18T15:50:50.572Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SQL, Python, Business Intelligence, Data Visualization, Generative AI, LLM-based tools, Cloud data platforms, dbt, Data pipeline orchestration tools, Fintech, Financial services, Payments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":93600,"maxValue":117000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f89bfa06-9c8"},"title":"Staff Engineer - Salesforce Developer","description":"<p>We are looking for a Staff Engineer to join our growing team in Business Technology (BT) and to help scale our business solutions while providing an extra focus on security, enabling Okta to be the most efficient, scalable, and reliable company.</p>\n<p>In this role, you will be responsible for designing and developing customizations, extensions, configurations, and integrations required to meet the company’s strategic business objectives. You will work collaboratively with Engineering Managers, business stakeholders, Product Owners, Program analysts, and engineers on different program design, development, deployment, and support.</p>\n<p>Core competencies expected of a Staff Engineer include operating with a high degree of autonomy, technical leadership, and project ownership. This includes architectural ownership and design, project and delivery leadership, mentorship and technical bar-setting, cross-functional influence, and future-forward technical skills.</p>\n<p>High-value skills include the ability to build Agents using Agentforce or by leveraging open source libraries to build agents, proficiency in using GitHub Copilot or Cursor or AI workflow orchestration tools, and strategic influence on technology roadmap.</p>\n<p>Qualifications include 7+ years of software development experience with experience in Java, Python, or equivalent, 5+ years&#39; hands-on Salesforce development with solid knowledge of Apex, Process Automation, and LWC, and experience in architecture, design, and implementation of various high-complexity projects/programs for Sales Cloud, CPQ, Service Cloud Console, etc.</p>\n<p>Our team is collaborative, innovative, and flexible, and we consider work-life balance a top priority.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f89bfa06-9c8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7348510","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Python","Apex","Process Automation","LWC","Agentforce","GitHub Copilot","Cursor","AI workflow orchestration tools"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:27.103Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Python, Apex, Process Automation, LWC, Agentforce, GitHub Copilot, Cursor, AI workflow orchestration tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7a6a5e65-740"},"title":"Data Analyst III","description":"<p>Why join us</p>\n<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>\n<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>\n<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry. We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream. We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>\n<p>Data at Brex</p>\n<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations. Our Data Scientists, Analysts, and Engineers work together to make data,and insights derived from data,a core asset across the company.</p>\n<p>What you’ll do</p>\n<p>As a senior Data Analyst (DA III), you will own the end-to-end analytics lifecycle for one or more business areas at Brex. You’ll go beyond building dashboards,you’ll frame the right questions, design rigorous analyses, apply statistical methods, and translate your findings into clear recommendations for leadership. You will also serve as a technical leader on the Data Analytics team, mentoring more junior analysts and helping define the standards and best practices that elevate the team’s work.</p>\n<p>This role sits at the intersection of analytics, analytics engineering, and business strategy. You’ll work in a modern data stack environment and partner closely with Data Scientists, Data Engineers, and senior leaders across the organization.</p>\n<p>Where you’ll work</p>\n<p>This role will be based in our New York office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of three coordinated days in the office per week, Monday, Wednesday and Thursday. As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p>Responsibilities</p>\n<ul>\n<li>Own the analytics lifecycle for assigned business areas: from problem framing and data sourcing through analysis, insight generation, and stakeholder presentation.</li>\n</ul>\n<ul>\n<li>Build and maintain dashboards and self-service reporting tools that enable business teams to independently track performance, identify risks, and make data-driven decisions.</li>\n</ul>\n<ul>\n<li>Write production-quality SQL and Python code to extract, transform, and analyze data at scale.</li>\n</ul>\n<ul>\n<li>Collaborate with Data Engineers and Data Scientists to develop and maintain analytical data models, improve data pipelines, and ensure data quality across the organization.</li>\n</ul>\n<ul>\n<li>Partner with leadership across Sales, Operations, Product, Finance, and other departments to identify high-impact analytical opportunities and deliver actionable recommendations.</li>\n</ul>\n<ul>\n<li>Mentor other data analysts and contribute to the development of team standards, documentation, code review practices, and analytical frameworks.</li>\n</ul>\n<ul>\n<li>Proactively identify gaps in data infrastructure, propose improvements, and contribute to the evolution of the team’s tooling and processes.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>5+ years of experience in data analytics, business intelligence, or a related quantitative role.</li>\n</ul>\n<ul>\n<li>3+ years of experience partnering directly with Sales, Operations, Product, or equivalent business teams as an embedded analytics partner.</li>\n</ul>\n<ul>\n<li>Advanced SQL proficiency, including CTEs, window functions, performance optimization, and working across complex data models.</li>\n</ul>\n<ul>\n<li>Proficiency in Python for data analysis, automation, and modeling (Pandas, NumPy, scikit-learn, or similar).</li>\n</ul>\n<ul>\n<li>Experience with cloud data warehouses, particularly Snowflake (BigQuery and Databricks also valued).</li>\n</ul>\n<ul>\n<li>Hands-on experience with BI and data visualization tools (Looker, Tableau, Hex, or similar).</li>\n</ul>\n<ul>\n<li>Strong stakeholder management skills,proven ability to present complex technical findings to non-technical audiences.</li>\n</ul>\n<ul>\n<li>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automated reporting, and build self-service data tools.</li>\n</ul>\n<p>Bonus points</p>\n<ul>\n<li>Demonstrated experience applying statistical methods to business problems (e.g., regression, classification, A/B testing).</li>\n</ul>\n<ul>\n<li>Experience with dbt for data modeling and transformation.</li>\n</ul>\n<ul>\n<li>Experience building and maintaining data pipelines using orchestration tools such as Airflow.</li>\n</ul>\n<ul>\n<li>Experience working with APIs for data ingestion and integration.</li>\n</ul>\n<ul>\n<li>Familiarity with version control systems (Git).</li>\n</ul>\n<ul>\n<li>Experience in fintech, financial services, or payments.</li>\n</ul>\n<ul>\n<li>Track record of leading cross-functional analytics projects from scoping through delivery.</li>\n</ul>\n<p>Compensation</p>\n<p>The expected salary range for this role is $114,192 - $142,740. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7a6a5e65-740","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8463704002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$114,192 - $142,740","x-skills-required":["SQL","Python","Cloud data warehouses","BI and data visualization tools","Stakeholder management","Generative AI and LLM-based tools"],"x-skills-preferred":["Statistical methods","dbt for data modeling and transformation","Orchestration tools","APIs for data ingestion and integration","Version control systems","Fintech, financial services, or payments"],"datePosted":"2026-04-18T15:45:08.268Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SQL, Python, Cloud data warehouses, BI and data visualization tools, Stakeholder management, Generative AI and LLM-based tools, Statistical methods, dbt for data modeling and transformation, Orchestration tools, APIs for data ingestion and integration, Version control systems, Fintech, financial services, or payments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114192,"maxValue":142740,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3d22e39a-bde"},"title":"Data Analyst II","description":"<p>Why join us</p>\n<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>\n<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>\n<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry.</p>\n<p>We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream.</p>\n<p>We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>\n<p>Data at Brex</p>\n<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations.</p>\n<p>Our Data Scientists, Analysts, and Engineers work together to make data,and insights derived from data,a core asset across the company.</p>\n<p>What you’ll do</p>\n<p>As a Data Analyst II (DA), you will play a central role in enhancing the operational tracking and reporting capabilities of different business teams across Brex.</p>\n<p>You will work closely with Data Scientists, Data Engineers, and partner teams to drive meaningful insights for the business through visualizations, self-service tools, and ad-hoc analyses.</p>\n<p>This is a high-impact role in a fast-paced fintech environment where your work will directly influence strategic decisions.</p>\n<p>Where you’ll work</p>\n<p>This role will be based in our San Francisco office.</p>\n<p>We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home.</p>\n<p>We currently require a minimum of three coordinated days in the office per week, Monday, Wednesday and Thursday.</p>\n<p>As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p>Responsibilities</p>\n<p>Apply data visualization and storytelling skills in creating business intelligence solutions (such as Looker and/or Hex dashboards) that enable actionable insights.</p>\n<p>Perform ad-hoc analyses and deep dives to investigate business questions, surface trends, and provide data-driven recommendations.</p>\n<p>Develop self-service data tools and processes that empower business stakeholders to independently monitor the performance and health of their respective areas.</p>\n<p>Collaborate closely with Data Scientists and Data Engineers to identify data sources, enable data pipelines, and support the development of analytical data models that operationalize reports and dashboards.</p>\n<p>Implement and maintain rigorous data quality checks to ensure the integrity and robustness of datasets used across dashboards, reports, and analyses.</p>\n<p>Partner with various departments,including Sales, Operations, Product, and Finance,to understand their data needs and deliver tailored analyses and reporting that support strategic planning.</p>\n<p>Contribute to the automation of recurring analyses and reporting workflows using Python.</p>\n<p>Requirements</p>\n<p>3+ years of experience in data analytics or a related role in a professional setting.</p>\n<p>2+ years of experience working directly with Sales, Operations, Product, or equivalent business teams.</p>\n<p>Fluency in SQL to manipulate data and perform complex analyses (CTEs, window functions, joins across large datasets).</p>\n<p>Experience with Python for data analysis, automation, or scripting.</p>\n<p>Experience with business intelligence and data visualization tools (Looker, Hex, Tableau, or similar).</p>\n<p>Strong quantitative and analytical skills with a demonstrated ability to translate data into business insights.</p>\n<p>Strong communication skills and the ability to work effectively with stakeholders across different functions and levels of technical fluency.</p>\n<p>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automated reporting, and build self-service data tools.</p>\n<p>Bonus points</p>\n<p>Familiarity with cloud data platforms (e.g., Snowflake, BigQuery, Databricks).</p>\n<p>Familiarity with dbt for data modeling and transformation.</p>\n<p>Exposure to data pipeline orchestration tools (e.g., Airflow).</p>\n<p>Experience in fintech, financial services, or payments.</p>\n<p>Comfort operating in a fast-paced, high-growth environment with evolving priorities.</p>\n<p>Compensation</p>\n<p>The expected salary range for this role is $93,600 - $117,000.</p>\n<p>However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>\n<p>Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3d22e39a-bde","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8463696002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$93,600 - $117,000","x-skills-required":["SQL","Python","Business Intelligence","Data Visualization","Generative AI","LLM-based tools"],"x-skills-preferred":["Cloud data platforms","dbt","Data pipeline orchestration tools","Fintech","Financial services","Payments"],"datePosted":"2026-04-18T15:44:50.317Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SQL, Python, Business Intelligence, Data Visualization, Generative AI, LLM-based tools, Cloud data platforms, dbt, Data pipeline orchestration tools, Fintech, Financial services, Payments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":93600,"maxValue":117000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f11cbe31-495"},"title":"Software Engineer","description":"<p>Join the team as our next Software Engineer.</p>\n<p>This position is needed to build and maintain reliable applications for Twilio&#39;s supply insights and trust. The work involves developing back-end applications and front-end for internal tools.</p>\n<p>As a Software Engineer in the team, you will be partnering with product managers, architects, engineering managers and other engineers to develop features for Messaging Supply products. You will be developing our messaging supply platform with emphasis on interfaces for Twilio&#39;s suppliers to interact with Twilio, automation of manual tasks, and working on new features that support both internal and customer-facing applications.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, test and deploy features alongside a small, distributed collaborative team to build highly scalable and available services</li>\n</ul>\n<ul>\n<li>Collaborate other cross-functional teams, product managers, designers, and engineers to build compelling user experiences for developers and end users</li>\n</ul>\n<ul>\n<li>Ensure quality by writing unit, integration, and load tests, as well as conducting thorough code reviews.</li>\n</ul>\n<ul>\n<li>Work independently to troubleshoot/determine resolution for issues in your team&#39;s domain</li>\n</ul>\n<ul>\n<li>Build new features for both internal and customer-facing applications to ensure seamless integration and great customer experience</li>\n</ul>\n<p>Qualifications:</p>\n<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!</p>\n<p>Required:</p>\n<ul>\n<li>At least 2 years of experience with full-stack software engineering</li>\n</ul>\n<ul>\n<li>Strong Computer Science fundamentals, not limited to data structures, algorithms, operating systems, and distributed systems</li>\n</ul>\n<ul>\n<li>Knowledge of processes and engineering best practices in all phases of the software development lifecycle, such as testing and devops standards</li>\n</ul>\n<ul>\n<li>Proficiency in at least one programming language, web stack and framework</li>\n</ul>\n<ul>\n<li>Strong oral and written communication skills (in English): be prepared to frequently propose and discuss ideas and implementation details with your teammates, as well as involving other stakeholders in Twilio - we’re one single team, no one flies solo!</li>\n</ul>\n<p>Desired:</p>\n<ul>\n<li>Experience working with Java frameworks like Spring, Hibernate, Dropwizard.</li>\n</ul>\n<ul>\n<li>Experience working with React or a different web development framework</li>\n</ul>\n<ul>\n<li>Good understanding of DevOps CI/CD pipeline</li>\n</ul>\n<ul>\n<li>Experience working with agile/scrum methodologies</li>\n</ul>\n<ul>\n<li>Experience with containerization and orchestration tools (e.g., Docker, Kubernetes)</li>\n</ul>\n<ul>\n<li>Experience documenting your solutions and proposals</li>\n</ul>\n<p>Location</p>\n<p>This role will be remote from Estonia.</p>\n<p>Travel</p>\n<p>We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.</p>\n<p>What We Offer</p>\n<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.</p>\n<p>Twilio thinks big. Do you?</p>\n<p>We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That&#39;s why we seek out colleagues who embody our values , something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts.</p>\n<p>So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn&#39;t what you&#39;re looking for, please consider other open positions.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f11cbe31-495","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7647708","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["full-stack software engineering","Computer Science fundamentals","processes and engineering best practices","proficiency in at least one programming language","web stack and framework"],"x-skills-preferred":["Java frameworks like Spring, Hibernate, Dropwizard","React or a different web development framework","DevOps CI/CD pipeline","agile/scrum methodologies","containerization and orchestration tools (e.g., Docker, Kubernetes)"],"datePosted":"2026-04-18T15:44:06.837Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Estonia"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"full-stack software engineering, Computer Science fundamentals, processes and engineering best practices, proficiency in at least one programming language, web stack and framework, Java frameworks like Spring, Hibernate, Dropwizard, React or a different web development framework, DevOps CI/CD pipeline, agile/scrum methodologies, containerization and orchestration tools (e.g., Docker, Kubernetes)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_18ae1499-b22"},"title":"Research Engineer, Discovery","description":"<p>As a Research Engineer on our team, you will work end-to-end across the whole model stack, identifying and addressing key infra blockers on the path to scientific AGI. Strong candidates should have familiarity with elements of language model training, evaluation, and inference and eagerness to quickly dive and get up to speed in areas they are not yet an expert on.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and implement large-scale infrastructure systems to support AI scientist training, evaluation, and deployment across distributed environments</li>\n<li>Identify and resolve infrastructure bottlenecks impeding progress toward scientific capabilities</li>\n<li>Develop robust and reliable evaluation frameworks for measuring progress towards scientific AGI</li>\n<li>Build scalable and performant VM/sandboxing/container architectures to safely execute long-horizon AI tasks and scientific workflows</li>\n<li>Collaborate to translate experimental requirements into production-ready infrastructure</li>\n<li>Develop large scale data pipelines to handle advanced language model training requirements</li>\n<li>Optimize large scale training and inference pipelines for stable and efficient reinforcement learning</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 6+ years of highly-relevant experience in infrastructure engineering with demonstrated expertise in large-scale distributed systems</li>\n<li>Are a strong communicator and enjoy working collaboratively</li>\n<li>Possess deep knowledge of performance optimization techniques and system architectures for high-throughput ML workloads</li>\n<li>Have experience with containerization technologies (Docker, Kubernetes) and orchestration at scale</li>\n<li>Have proven track record of building large-scale data pipelines and distributed storage systems</li>\n<li>Excel at diagnosing and resolving complex infrastructure challenges in production environments</li>\n<li>Can work effectively across the full ML stack from data pipelines to performance optimization</li>\n<li>Have experience collaborating with other researchers to scale experimental ideas</li>\n<li>Thrive in fast-paced environments and can rapidly iterate from experimentation to production</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience with language model training infrastructure and distributed ML frameworks (PyTorch, JAX, etc.)</li>\n<li>Background in building infrastructure for AI research labs or large-scale ML organizations</li>\n<li>Knowledge of GPU/TPU architectures and language model inference optimization</li>\n<li>Experience with cloud platforms (AWS, GCP) at enterprise scale</li>\n<li>Familiarity with VM and container orchestration</li>\n<li>Experience with workflow orchestration tools and experiment management systems</li>\n<li>History working with large scale reinforcement learning</li>\n<li>Comfort with large scale data pipelines (Beam, Spark, Dask, …)</li>\n</ul>\n<p>The annual compensation range for this role is $350,000-$850,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_18ae1499-b22","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4669581008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000-$850,000 USD","x-skills-required":["large-scale distributed systems","containerization technologies (Docker, Kubernetes)","performance optimization techniques","system architectures for high-throughput ML workloads","data pipelines","distributed storage systems","ML frameworks (PyTorch, JAX, etc.)","GPU/TPU architectures","cloud platforms (AWS, GCP)","VM and container orchestration","workflow orchestration tools","experiment management systems","reinforcement learning","large scale data pipelines (Beam, Spark, Dask, …)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:42.408Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"large-scale distributed systems, containerization technologies (Docker, Kubernetes), performance optimization techniques, system architectures for high-throughput ML workloads, data pipelines, distributed storage systems, ML frameworks (PyTorch, JAX, etc.), GPU/TPU architectures, cloud platforms (AWS, GCP), VM and container orchestration, workflow orchestration tools, experiment management systems, reinforcement learning, large scale data pipelines (Beam, Spark, Dask, …)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bc12a602-5fc"},"title":"Software Engineer","description":"<p>Join the team as Twilio&#39;s next Software Engineer.</p>\n<p>This position is needed to develop the future platform of communications. Twilio SMS Engineering is looking for a Software Engineer to join our team to work on our SMS connectivity layer with the purpose to build and optimize for delivery.</p>\n<p>You will be developing a complex distributed platform in Java and be concerned with availability, throughput, latency, and data integrity. At the core are cloud technologies that enable deployment and management of computing resources globally.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, test and deploy features alongside an experienced, distributed collaborative team</li>\n</ul>\n<ul>\n<li>Participating in code reviews to ensure code quality and adherence to coding standards.</li>\n</ul>\n<ul>\n<li>Work independently to troubleshoot/determine resolution for issues in your team&#39;s domain</li>\n</ul>\n<ul>\n<li>Managing your work through the use of Github, Jira, and our build/deploy systems</li>\n</ul>\n<ul>\n<li>Ensure quality by writing unit-, integration- and load-tests</li>\n</ul>\n<ul>\n<li>Collaborating with cross-functional teams to define, design, and ship new features.</li>\n</ul>\n<p>Qualifications:</p>\n<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>\n<p>If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>\n<p>We are always looking for people who will bring something new to the table!</p>\n<p>*Required:</p>\n<ul>\n<li>Experience with Java frameworks such as Dropwizard, Spring, Hibernate, or similar.</li>\n</ul>\n<ul>\n<li>Experience with cloud services (AWS preferred, Google, Azure etc.)</li>\n</ul>\n<ul>\n<li>Strong Computer Science fundamentals not limited to data structures, algorithms, operating systems, and distributed systems</li>\n</ul>\n<ul>\n<li>Knowledge of processes and engineering best practices in all phases of the software development life cycle</li>\n</ul>\n<ul>\n<li>Readiness to participate in the on-call rotation</li>\n</ul>\n<ul>\n<li>Strong communication skills and desire to make an impact and thrive in small, collaborative, energetic teams</li>\n</ul>\n<p>Desired:</p>\n<ul>\n<li>Experience with microservice architecture</li>\n</ul>\n<ul>\n<li>Experience working with Agile/Scrum methodologies</li>\n</ul>\n<ul>\n<li>Experience with containerization and orchestration tools (e.g., Docker, Kubernetes)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bc12a602-5fc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7699251","x-work-arrangement":"remote","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Dropwizard","Spring","Hibernate","cloud services","AWS","Google","Azure","Computer Science","data structures","algorithms","operating systems","distributed systems","processes","engineering best practices"],"x-skills-preferred":["microservice architecture","Agile/Scrum methodologies","containerization","orchestration tools","Docker","Kubernetes"],"datePosted":"2026-04-18T15:41:10.523Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Estonia"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Dropwizard, Spring, Hibernate, cloud services, AWS, Google, Azure, Computer Science, data structures, algorithms, operating systems, distributed systems, processes, engineering best practices, microservice architecture, Agile/Scrum methodologies, containerization, orchestration tools, Docker, Kubernetes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7f904cf7-7bd"},"title":"Data Analyst II","description":"<p>Join us at Brex, the intelligent finance platform that empowers companies to spend smarter and move faster in over 200 markets. As a Data Analyst II, you will play a central role in enhancing the operational tracking and reporting capabilities of different business teams across Brex.</p>\n<p>As a member of our Data organization, you will work closely with Data Scientists, Data Engineers, and partner teams to drive meaningful insights for the business through visualizations, self-service tools, and ad-hoc analyses. This is a high-impact role in a fast-paced fintech environment where your work will directly influence strategic decisions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Apply data visualization and storytelling skills in creating business intelligence solutions (such as Looker and/or Hex dashboards) that enable actionable insights.</li>\n<li>Perform ad-hoc analyses and deep dives to investigate business questions, surface trends, and provide data-driven recommendations.</li>\n<li>Develop self-service data tools and processes that empower business stakeholders to independently monitor the performance and health of their respective areas.</li>\n<li>Collaborate closely with Data Scientists and Data Engineers to identify data sources, enable data pipelines, and support the development of analytical data models that operationalize reports and dashboards.</li>\n<li>Implement and maintain rigorous data quality checks to ensure the integrity and robustness of datasets used across dashboards, reports, and analyses.</li>\n<li>Partner with various departments,including Sales, Operations, Product, and Finance,to understand their data needs and deliver tailored analyses and reporting that support strategic planning.</li>\n<li>Contribute to the automation of recurring analyses and reporting workflows using Python.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>4+ years of experience in data analytics or a related role in a professional setting.</li>\n<li>3+ years of experience working directly with Sales, Operations, Product, or equivalent business teams.</li>\n<li>Fluency in SQL to manipulate data and perform complex analyses (CTEs, window functions, joins across large datasets).</li>\n<li>Proficiency in Python for data analysis, automation, and scripting (Pandas, NumPy, and similar libraries).</li>\n<li>Experience with business intelligence and data visualization tools (Looker, Hex, Tableau, or similar).</li>\n<li>Strong quantitative and analytical skills with a demonstrated ability to translate data into business insights.</li>\n<li>Strong communication skills and the ability to work effectively with stakeholders across different functions and levels of technical fluency.</li>\n<li>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automated reporting, and build self-service data tools.</li>\n</ul>\n<p>Bonus points:</p>\n<ul>\n<li>Familiarity with cloud data platforms (e.g., Snowflake, BigQuery, Databricks).</li>\n<li>Familiarity with dbt for data modeling and transformation.</li>\n<li>Exposure to data pipeline orchestration tools (e.g., Airflow).</li>\n<li>Experience in fintech, financial services, or payments.</li>\n<li>Comfort operating in a fast-paced, high-growth environment with evolving priorities.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7f904cf7-7bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex LLC","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8463703002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","Business Intelligence","Data Visualization","Generative AI","LLM-based tools"],"x-skills-preferred":["Cloud data platforms","dbt","Data pipeline orchestration tools","Fintech, financial services, or payments"],"datePosted":"2026-04-18T15:39:28.984Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"São Paulo, São Paulo, Brazil"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SQL, Python, Business Intelligence, Data Visualization, Generative AI, LLM-based tools, Cloud data platforms, dbt, Data pipeline orchestration tools, Fintech, financial services, or payments"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_586b9fef-509"},"title":"Senior Software Engineer - Network Enablement (Applied ML)","description":"<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>\n<p>On this team, you will build and operate the ML infrastructure and product services that enable trust and intelligence across Plaid&#39;s network. You&#39;ll own feature engineering, offline training and batch scoring, online feature serving, and real-time inference so model outputs directly power partner-facing fraud &amp; trust products and bank intelligence features.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Embed model inference into Network Enablement product flows and decision logic (APIs, feature flags, backend flows).</li>\n<li>Define and instrument product + ML success metrics (fraud reduction, retention lift, false positives, downstream impact).</li>\n<li>Design and run experiments and rollout plans (backtesting, shadow scoring, A/B tests, feature-flagged releases) to validate product hypotheses.</li>\n<li>Build and operate offline training pipelines and production batch scoring for bank intelligence products.</li>\n<li>Ship and maintain online feature serving and low-latency model inference endpoints for real-time partner/bank scoring.</li>\n<li>Implement model CI/CD, model/version registry, and safe rollout/rollback strategies.</li>\n<li>Monitor model/data health: drift/regression detection, model-quality dashboards, alerts, and SLOs targeted to partner product needs.</li>\n<li>Ensure offline and online parity, data lineage, and automated validation / data contracts to reduce regressions.</li>\n<li>Optimize inference performance and cost for real-time scoring (batching, caching, runtime selection).</li>\n<li>Ensure fairness, explainability and PII-aware handling for partner-facing ML features; maintain auditability for compliance.</li>\n<li>Partner with platform and cross-functional teams to scale the ML/data foundation (graph features, sequence embeddings, unified pipelines).</li>\n<li>Mentor engineers and document team standards for ML productization and operations.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Must-haves:</li>\n<li>Strong software engineering skills including systems design, APIs, and building reliable backend services (Go or Python preferred).</li>\n<li>Production experience with batch and streaming data pipelines and orchestration tools such as Airflow or Spark.</li>\n<li>Experience building or operating real-time scoring and online feature-serving systems, including feature stores and low-latency model inference.</li>\n<li>Experience integrating model outputs into product flows (APIs, feature flags) and measuring impact through experiments and product metrics.</li>\n<li>Experience with model lifecycle and operations: model registries, CI/CD for models, reproducible training, offline &amp; online parity, monitoring and incident response.</li>\n<li>Nice to have:</li>\n<li>Experience in fraud, risk, or marketing intelligence domains.</li>\n<li>Experience with feature-store products (Tecton / Chronon / Feast / internal) and unified pipelines.</li>\n<li>Experience with graph frameworks, graph feature engineering, or sequence embeddings.</li>\n<li>Experience optimizing inference at scale (Triton/ONNX/quantization, batching, caching).</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_586b9fef-509","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Plaid","sameAs":"https://plaid.com/","logo":"https://logos.yubhub.co/plaid.com.png"},"x-apply-url":"https://jobs.lever.co/plaid/43b1374d-5c5e-4b63-b710-a95e3cb76bbe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,800-$286,800 per year","x-skills-required":["software engineering","systems design","APIs","backend services","Go","Python","batch and streaming data pipelines","orchestration tools","Airflow","Spark","real-time scoring","online feature-serving systems","feature stores","low-latency model inference","model outputs","product flows","experiments","product metrics","model lifecycle","operations","model registries","CI/CD","reproducible training","offline & online parity","monitoring","incident response"],"x-skills-preferred":["fraud","risk","marketing intelligence","feature-store products","unified pipelines","graph frameworks","graph feature engineering","sequence embeddings","inference at scale","Triton","ONNX","quantization","batching","caching"],"datePosted":"2026-04-17T12:51:26.228Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, systems design, APIs, backend services, Go, Python, batch and streaming data pipelines, orchestration tools, Airflow, Spark, real-time scoring, online feature-serving systems, feature stores, low-latency model inference, model outputs, product flows, experiments, product metrics, model lifecycle, operations, model registries, CI/CD, reproducible training, offline & online parity, monitoring, incident response, fraud, risk, marketing intelligence, feature-store products, unified pipelines, graph frameworks, graph feature engineering, sequence embeddings, inference at scale, Triton, ONNX, quantization, batching, caching","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190800,"maxValue":286800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a2e88648-d1d"},"title":"Mistral Cloud - Site Reliability Engineer","description":"<p>We are seeking highly experienced Site Reliability Engineers (SRE) to shape the reliability, scalability and performance of our Cloud platform and customer facing applications.</p>\n<p>You will work closely with our software engineers and product teams to ensure our systems meet and exceed our internal and external customers&#39; expectations.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design, build, and maintain scalable, highly available and fault-tolerant infrastructures</li>\n<li>Operate systems and troubleshoot issues in production environments</li>\n<li>Implement and improve monitoring, alerting, and incident response systems</li>\n<li>Implement and maintain workflows and tools for both our customer-facing APIs and large training runs</li>\n</ul>\n<p>Development responsibilities include:</p>\n<ul>\n<li>Drive continuous improvement in infrastructure automation, deployment, and orchestration</li>\n<li>Collaborate with software engineers to develop and implement solutions that enable safe and reproducible model-training experiments</li>\n<li>Help build a cloud platform offering an abstraction layer between science, engineering and infrastructure</li>\n<li>Design and develop new workflows and tooling to improve the reliability, availability and performance of our systems</li>\n</ul>\n<p>Additional responsibilities include:</p>\n<ul>\n<li>Collaborate with the security team to ensure infrastructure adheres to best security practices and compliance requirements</li>\n<li>Document processes and procedures to ensure consistency and knowledge sharing across the team</li>\n<li>Contribute to open-source projects, research publications, blog articles and conferences</li>\n</ul>\n<p>About you:</p>\n<ul>\n<li>Master’s degree in Computer Science, Engineering or a related field</li>\n<li>5+ years of experience in a DevOps/SRE role</li>\n<li>Strong experience with bare metal infrastructure and highly available distributed systems</li>\n<li>Exposure to site reliability issues in critical environments</li>\n<li>Experience working against reliability KPIs</li>\n<li>Hands-on experience with CI/CD, containerization and orchestration tools</li>\n<li>Knowledge of monitoring, logging, alerting and observability tools</li>\n<li>Familiarity with infrastructure-as-code tools</li>\n<li>Proficiency in scripting languages and knowledge of software development best practices</li>\n<li>Strong understanding of networking, security, and system administration concepts</li>\n<li>Excellent problem-solving and communication skills</li>\n</ul>\n<p>Your application will be all the more interesting if you also have:</p>\n<ul>\n<li>Experience in an AI/ML environment</li>\n<li>Experience of high-performance computing (HPC) systems and workload managers</li>\n<li>Worked with modern AI-oriented solutions</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a2e88648-d1d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/f76907fd-428a-4824-a1cf-8013974fde29","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["bare metal infrastructure","highly available distributed systems","CI/CD","containerization","orchestration tools","monitoring","logging","alerting","observability tools","infrastructure-as-code tools","scripting languages","software development best practices","networking","security","system administration"],"x-skills-preferred":["AI/ML environment","high-performance computing (HPC) systems","workload managers","modern AI-oriented solutions"],"datePosted":"2026-04-17T12:47:48.920Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"bare metal infrastructure, highly available distributed systems, CI/CD, containerization, orchestration tools, monitoring, logging, alerting, observability tools, infrastructure-as-code tools, scripting languages, software development best practices, networking, security, system administration, AI/ML environment, high-performance computing (HPC) systems, workload managers, modern AI-oriented solutions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2c01d9b5-3e0"},"title":"AI Engineer","description":"<p>About Belong</p>\n<p>We believe in a world where homes are owned by regular people, not corporations. Our mission is to provide authentic belonging experiences, empowering residents to become homeowners and homeowners to achieve financial freedom.</p>\n<p>The Role</p>\n<p>Belong is looking for an AI Automation Engineer to help transform real-world operations through practical, high-impact AI solutions. You’ll be building and shipping AI-powered workflows that directly improve how our teams operate and how our customers experience Belong.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Build AI-powered applications and workflows that automate and enhance real-world business operations, including evaluation and safety mechanisms.</li>\n<li>Rapidly prototype AI-driven solutions, validate them in real scenarios, and evolve them into production-ready systems.</li>\n<li>Integrate AI capabilities into backend services, internal tools, and external platforms through well-designed APIs and services.</li>\n<li>Own AI-driven initiatives end to end, from early experimentation to production deployment, proactively leveraging AI code generation tools to confidently contribute across the backend and frontend stack when needed.</li>\n<li>Work closely with product, operations, customer support and engineering teams to identify automation opportunities and deliver meaningful impact.</li>\n</ul>\n<p>What We’re Looking For</p>\n<ul>\n<li>Strong programming skills in Python and/or TypeScript.</li>\n<li>Solid software engineering fundamentals and experience building and shipping production systems.</li>\n<li>Experience deploying, operating, and iterating on AI-powered applications.</li>\n<li>Familiarity with modern AI tooling, agent frameworks and workflow orchestration tools.</li>\n<li>A proactive mindset with a strong sense of ownership and the ability to drive initiatives forward.</li>\n<li>Clear communication skills and a collaborative approach to working in cross-functional teams.</li>\n</ul>\n<p>Why Belong</p>\n<ul>\n<li>We’re tackling one of the biggest, most broken industries (housing) and creating something entirely new.</li>\n<li>You’ll work alongside world-class founders and leaders who have scaled successful companies.</li>\n<li>AI isn’t a side project here, it’s at the core of our strategy and product roadmap.</li>\n<li>Competitive compensation, equity, and benefits.</li>\n<li>Ownership, autonomy, and the opportunity to build something that matters.</li>\n</ul>\n<p>If you’re excited about building practical AI solutions, owning problems end to end, and pushing what’s possible in real-world operations, we’d love to talk. Apply now.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2c01d9b5-3e0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Belong","sameAs":"https://www.belong.com","logo":"https://logos.yubhub.co/belong.com.png"},"x-apply-url":"https://jobs.lever.co/belong/50109bb9-7e26-4bcc-855d-87da77964fee","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","TypeScript","AI tooling","Agent frameworks","Workflow orchestration tools",".NET","React","Next.js"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:22:29.661Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Buenos Aires"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript, AI tooling, Agent frameworks, Workflow orchestration tools, .NET, React, Next.js"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_12b3e7a7-24b"},"title":"Backend Engineer (Data)","description":"<p><strong>Description</strong></p>\n<p>Fuse Energy is a forward-thinking renewable energy startup on a mission to deliver a terawatt of renewable energy - fast. We&#39;re combining first-principles thinking with cutting-edge technology to build a radically better energy system. We raised $170M from top-tier investors including Multicoin, Balderton, Lakestar, Accel, Creandum, Lowercarbon, Ribbit, Box Group and strategic angels like Nico Rosberg, the Co-Founder of Solana and GPs behind Meta, Revolut, Spotify, Uber and more.</p>\n<p>We’re creating a fully integrated energy company: from developing solar, wind and hydrogen projects to real-time power trading and distributed energy installations. By selling directly to consumers, we cut out the middleman, lower costs and pass on savings to customers.</p>\n<p>But we’re not stopping there. We’re also building the Energy Network: a decentralised platform of smart devices that rewards users in Energy Dollars for electrifying their homes, shifting usage to off-peak hours, and helping balance the grid. This network strengthens grid stability - a critical foundation for scaling AI data centers and other energy-intensive industries.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build and maintain scalable, reliable data pipelines to support analytics, reporting, and product needs</li>\n<li>Own the design and evolution of analytical schemas, translating business logic into structured, intuitive data models</li>\n<li>Migrate and transform data from Postgres into Clickhouse, ensuring performance and reliability</li>\n<li>Develop and maintain DBT models that reflect our business domain and make data easily accessible for teams</li>\n<li>Implement tests and data quality checks to ensure reliable and trustworthy datasets</li>\n<li>Identify and eliminate duplicates, improve data consistency, and enforce clean modeling standards</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of experience as a Backend Engineer or in a data-focused engineering role</li>\n<li>Proficiency in Python and SQL, with the ability to write clean, efficient code and queries</li>\n<li>Hands-on experience working with relational databases, particularly Postgres</li>\n<li>Experience designing schemas and building data models that reflect real-world business logic</li>\n<li>Familiarity with DBT or similar data transformation frameworks</li>\n<li>Strong understanding of data validation, testing, and quality assurance practices</li>\n</ul>\n<p><strong>Bonus</strong></p>\n<ul>\n<li>Familiarity with cloud-based data infrastructure or data orchestration tools</li>\n<li>Experience with CI/CD practices for data pipelines and transformations</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and an equity sign-on bonus</li>\n<li>Biannual bonus scheme</li>\n<li>Fully expensed tech to match your needs</li>\n<li>Paid annual leave</li>\n<li>Breakfast and dinner allowance for office based employees</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_12b3e7a7-24b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Fuse Energy","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/f1WFaX5eREjwSWJ8Eo9yzt/hybrid-backend-engineer-(data)-in-london-at-fuse-energy","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Postgres","DBT","Clickhouse"],"x-skills-preferred":["cloud-based data infrastructure","data orchestration tools","CI/CD practices"],"datePosted":"2026-03-09T16:58:27.903Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Postgres, DBT, Clickhouse, cloud-based data infrastructure, data orchestration tools, CI/CD practices"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_05ea3590-83b"},"title":"Backend Engineer (Data)","description":"<p>You will join a forward-thinking renewable energy startup on a mission to deliver a terawatt of renewable energy - fast. We&#39;re combining first-principles thinking with cutting-edge technology to build a radically better energy system.</p>\n<p>We&#39;re creating a fully integrated energy company: from developing solar, wind and hydrogen projects to real-time power trading and distributed energy installations. By selling directly to consumers, we cut out the middleman, lower costs and pass on savings to customers.</p>\n<p><strong>Responsibilities</strong></p>\n<p>You will build and maintain scalable, reliable data pipelines to support analytics, reporting, and product needs. This includes owning the design and evolution of analytical schemas, translating business logic into structured, intuitive data models. You will also migrate and transform data from Postgres into Clickhouse, ensuring performance and reliability.</p>\n<p>You will develop and maintain DBT models that reflect our business domain and make data easily accessible for teams. Additionally, you will implement tests and data quality checks to ensure reliable and trustworthy datasets. You will identify and eliminate duplicates, improve data consistency, and enforce clean modeling standards.</p>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of experience as a Backend Engineer or in a data-focused engineering role</li>\n<li>Proficiency in Python and SQL, with the ability to write clean, efficient code and queries</li>\n<li>Hands-on experience working with relational databases, particularly Postgres</li>\n<li>Experience designing schemas and building data models that reflect real-world business logic</li>\n<li>Familiarity with DBT or similar data transformation frameworks</li>\n<li>Strong understanding of data validation, testing, and quality assurance practices</li>\n</ul>\n<p><strong>Bonus</strong></p>\n<ul>\n<li>Familiarity with cloud-based data infrastructure or data orchestration tools</li>\n<li>Experience with CI/CD practices for data pipelines and transformations</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and an equity sign-on bonus</li>\n<li>Biannual bonus scheme</li>\n<li>Fully expensed tech to match your needs</li>\n<li>Paid annual leave</li>\n<li>Breakfast and dinner allowance for office-based employees</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_05ea3590-83b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Fuse Energy","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/5m73SDXSAwUg5q1c5NGgDA/hybrid-backend-engineer-(data)-in-dubai-at-fuse-energy","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Postgres","DBT","Clickhouse"],"x-skills-preferred":["Cloud-based data infrastructure","Data orchestration tools","CI/CD practices"],"datePosted":"2026-03-09T16:53:16.883Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dubai"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Postgres, DBT, Clickhouse, Cloud-based data infrastructure, Data orchestration tools, CI/CD practices"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9afb309b-13e"},"title":"Principal Software Engineer","description":"<p>You will play a key role in driving our data strategy, ensuring the integrity and accessibility of our data and leveraging data insights to support business decisions. As a Principal Software Engineer, you will collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions. You will develop and optimize data models to support data analytics, utilize advanced analytics techniques to extract insights from large datasets, and drive data-driven decision making. You will also implement data validation frameworks and monitoring systems to detect and resolve data quality issues, troubleshoot and resolve issues in data pipelines to ensure timely and accurate data delivery. Additionally, you will work with a security-first mindset, focusing on system scalability and maintainability, and coach and mentor peers and emerging team members while advocating for best practices.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.</li>\n<li>Develop and optimize data models to support data analytics.</li>\n<li>Utilize advanced analytics techniques to extract insights from large datasets and drive data-driven decision making.</li>\n<li>Implement data validation frameworks and monitoring systems to detect and resolve data quality issues.</li>\n<li>Troubleshoot and resolve issues in data pipelines to ensure timely and accurate data delivery.</li>\n<li>Work with a security-first mindset, focusing on system scalability and maintainability.</li>\n<li>Coach and mentor peers and emerging team members while advocating for best practices.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n<li>Ability to meet Microsoft, customer and/or government security screening requirements are required for this role.</li>\n<li>6+ years of experience in software engineering, with a focus on data engineering and data analytics.</li>\n<li>Solid experience with data processing frameworks such as Apache Spark, Hadoop.</li>\n<li>Expertise in SQL and experience with RDBMS, Key Value stores.</li>\n<li>Familiarity with cloud platforms and data services.</li>\n<li>Excellent problem-solving skills and the ability to work independently and as part of a team.</li>\n<li>Solid communication skills.</li>\n<li>Familiarity with Azure.</li>\n<li>Experience with machine learning and data science tools and frameworks.</li>\n<li>Knowledge of data visualization tools (e.g., Tableau, Power BI).</li>\n<li>Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor&#39;s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n<li>8+ years of experience in software engineering, with a focus on data engineering and data analytics.</li>\n<li>Solid experience with data processing frameworks such as Apache Spark, Hadoop.</li>\n<li>Expertise in SQL and experience with RDBMS, Key Value stores.</li>\n<li>Familiarity with cloud platforms and data services.</li>\n<li>Excellent problem-solving skills and the ability to work independently and as part of a team.</li>\n<li>Solid communication skills.</li>\n<li>Familiarity with Azure.</li>\n<li>Experience with machine learning and data science tools and frameworks.</li>\n<li>Knowledge of data visualization tools (e.g., Tableau, Power BI).</li>\n<li>Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>\n</ul>\n<p>Salary Range:</p>\n<ul>\n<li>The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>\n<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Microsoft is an equal opportunity employer.</li>\n<li>All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances.</li>\n<li>If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.</li>\n</ul>\n<p>Similar Jobs:</p>\n<ul>\n<li>Sr Account Executive(Advertising) Beijing, China</li>\n<li>Advertising Account Management</li>\n<li>Principal Software Engineer Bengaluru, India</li>\n<li>Software Engineering</li>\n<li>Member of Technical Staff, AI Product, Android Engineer Mountain View, US</li>\n<li>Software Engineering</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9afb309b-13e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft Advertising","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-37/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Spark","Hadoop","SQL","RDBMS","Key Value stores","Cloud platforms","Data services","Machine learning","Data science","Tableau","Power BI","Docker","Kubernetes"],"x-skills-preferred":["Master's Degree in Computer Science or related technical field","8+ years technical engineering experience","Expertise in SQL and experience with RDBMS, Key Value stores","Familiarity with cloud platforms and data services","Excellent problem-solving skills","Solid communication skills","Familiarity with Azure","Experience with machine learning and data science tools and frameworks","Knowledge of data visualization tools (e.g., Tableau, Power BI)","Experience with containerization and orchestration tools (e.g., Docker, Kubernetes)"],"datePosted":"2026-03-08T22:14:59.101Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Spark, Hadoop, SQL, RDBMS, Key Value stores, Cloud platforms, Data services, Machine learning, Data science, Tableau, Power BI, Docker, Kubernetes, Master's Degree in Computer Science or related technical field, 8+ years technical engineering experience, Expertise in SQL and experience with RDBMS, Key Value stores, Familiarity with cloud platforms and data services, Excellent problem-solving skills, Solid communication skills, Familiarity with Azure, Experience with machine learning and data science tools and frameworks, Knowledge of data visualization tools (e.g., Tableau, Power BI), Experience with containerization and orchestration tools (e.g., Docker, Kubernetes)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_015e5c6d-a31"},"title":"Senior Data Engineer","description":"<p><strong>Why Valvoline Global Operations?</strong></p>\n<p>At Valvoline Global Operations, we&#39;re proud to be The Original Motor Oil, but we&#39;ve never rested on being first. Founded in 1866, we introduced the world&#39;s first branded motor oil, staking our claim as a pioneer in the automotive and industrial solutions industry.</p>\n<p><strong>Job Purpose</strong></p>\n<p>We are seeking a highly skilled and motivated Data Engineer to join our growing data and analytics team. The ideal candidate will have strong experience designing and developing scalable data pipelines, integrating complex systems, and optimizing data workflows. Proficiency in Databricks and SAP Datasphere is preferred, as these platforms are central to our data ecosystem.</p>\n<p><strong>How You Make an Impact (Job Accountabilities)</strong></p>\n<ul>\n<li>Design, build, and maintain robust, scalable, and high-performance data pipelines using Databricks and SAP Datasphere.</li>\n<li>Collaborate with data architects, analysts, data scientists, and business stakeholders to gather requirements and deliver data solutions aligned with stakeholders&#39; goals.</li>\n<li>Integrate diverse data sources (e.g., SAP, APIs, flat files, cloud storage) into the enterprise data platforms</li>\n<li>Ensure high standards of data quality and implement data governance practices. Stay current with emerging trends and technologies in cloud computing, big data, and data engineering.</li>\n<li>Provide ongoing support for the platform, troubleshoot any issues that arise, and ensure high availability and reliability of data infrastructure.</li>\n<li>Create documentation for the platform infrastructure and processes, and train other team members or users in platform effectively.</li>\n</ul>\n<p><strong>What You Bring to the Role (Job Qualifications / Education / Skills / Requirements / Capabilities)</strong></p>\n<ul>\n<li>Bachelor&#39;s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.</li>\n<li>5-7+ years of experience in a data engineering or related role.</li>\n<li>Strong knowledge of data engineering principles, data warehousing concepts, and modern data architecture.</li>\n<li>Proficiency in SQL and at least one programming language (e.g., Python, Scala).</li>\n<li>Experience with cloud platforms (e.g., Azure, AWS, or GCP), particularly in data services.</li>\n<li>Familiarity with data orchestration tools (e.g., PySpark, Airflow, Azure Data Factory) and CI/CD pipelines.</li>\n</ul>\n<p><strong>Competencies Desired</strong></p>\n<ul>\n<li>Hands-on experience with Databricks (including Spark/PySpark, Delta Lake, MLflow, Unity Catalog, etc.).</li>\n<li>Practical experience working with SAP Datasphere (or SAP Data Warehouse Cloud) in data modeling and data integration scenarios.</li>\n<li>SAP BW or SAP HANA experience is a plus.</li>\n<li>Experience with BI tools like Power BI or Tableau.</li>\n<li>Understanding of data governance frameworks and data security best practices.</li>\n<li>Exposure to data lakehouse architecture and real-time streaming data pipelines.</li>\n<li>Certifications in Databricks, SAP, or cloud platforms are advantageous.</li>\n</ul>\n<p><strong>Working Conditions / Physical Requirements / Travel Requirements</strong></p>\n<ul>\n<li>Normal Office environment.</li>\n<li>Prolonged periods of computer use and frequent participation in meetings</li>\n<li>Occasional walking, standing, and light lifting (up to 10 lbs)</li>\n</ul>\n<ul>\n<li>Minimal travel required.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_015e5c6d-a31","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Valvoline Global Operations","sameAs":"https://jobs.valvolineglobal.com","logo":"https://logos.yubhub.co/jobs.valvolineglobal.com.png"},"x-apply-url":"https://jobs.valvolineglobal.com/job/Senior-Data-Engineer/1316654400/","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","Databricks","SAP Datasphere","SQL","Python","Scala","cloud platforms","data orchestration tools","CI/CD pipelines"],"x-skills-preferred":["Databricks","SAP Datasphere","SAP BW","SAP HANA","Power BI","Tableau","data governance frameworks","data security best practices","data lakehouse architecture","real-time streaming data pipelines"],"datePosted":"2026-03-08T22:14:37.507Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"data engineering, Databricks, SAP Datasphere, SQL, Python, Scala, cloud platforms, data orchestration tools, CI/CD pipelines, Databricks, SAP Datasphere, SAP BW, SAP HANA, Power BI, Tableau, data governance frameworks, data security best practices, data lakehouse architecture, real-time streaming data pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_11a9548c-a4f"},"title":"Staff+ Software Engineer, Developer Productivity","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the Role</strong></p>\n<p>Anthropic&#39;s Infrastructure organisation is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users — demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.</p>\n<p>Developer Productivity owns the end-to-end experience of how engineers and researchers at Anthropic develop, build, test, and ship code at scale — from the source control and language ecosystems that underpin our monorepo, to the build and CI infrastructure that keeps thousands of daily builds running reliably across multiple cloud providers, to the developer acceleration tooling that deeply integrates Claude into engineering workflows.</p>\n<p>_Team Matching: Team matching is determined after the interview process based on interview performance, interests, and business priorities. Please note we may also consider you for different Infrastructure teams._</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Own the technical strategy and roadmap for your area, translating team-level goals into concrete execution plans</li>\n<li>Define infrastructure architecture, ensuring the hardest problems get solved — whether by you directly or by working through others</li>\n<li>Design and build scalable, reliable distributed infrastructure and shared libraries that support high-volume workloads across all engineering teams</li>\n<li>Own and evolve build environments, package management, and dependency systems to enable fast, reproducible builds</li>\n<li>Define and implement language ecosystem standards, tooling, and frameworks that drive developer productivity across research and production workloads</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 10+ years (not including internships or co-ops) of experience in a Software Engineer role, building and operating large-scale developer infrastructure</li>\n<li>Have 3+ years (not including internships or co-ops) of experience leading large scale, complex projects or teams as an engineer or tech lead</li>\n<li>Have deep experience with build systems, CI/CD pipelines, and/or developer tooling in a large monorepo environment</li>\n<li>Have strong proficiency in Python, Rust and/or Go</li>\n<li>Are obsessed with developer productivity and reducing friction in the software development lifecycle</li>\n<li>Have experience with container orchestration and infrastructure at scale</li>\n<li>Have excellent communication skills and enjoy supporting internal partners to improve their development experience</li>\n<li>Are excited about designing foundational systems and are comfortable working independently on ambiguous, high-impact technical challenges</li>\n</ul>\n<p><strong>Strong candidates may have:</strong></p>\n<ul>\n<li>Experience with CI orchestration tools (Buildkite, Jenkins, GitHub Actions, or similar) and merge queue management at scale</li>\n<li>Experience building or operating remote build execution systems (Bazel Remote Execution API, BuildBarn, BuildBuddy, or similar)</li>\n<li>Experience with Nix/NixOS/Docker and managing large image / package sets at scale</li>\n<li>Experience building CLI tools, developer-facing services, and GitHub API and automation workflows</li>\n</ul>\n<p>_Deadline to apply: None. Applications will be reviewed on a rolling basis._</p>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary:</p>\n<p>$405,000 - $485,000USD</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>\n<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact work in AI safety and development happens at the intersection of technical expertise and societal responsibility. We&#39;re committed to building a team that reflects a wide range of backgrounds, perspectives, and experiences. We believe that diversity in all its forms drives better decision-making, more innovative solutions, and greater impact.</p>\n<p>We&#39;re an equal opportunities employer and welcome applications from all qualified candidates.</p>\n<p>If you&#39;re excited about this role and want to learn more, please don&#39;t hesitate to reach out to us. We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_11a9548c-a4f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5110511008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000 - $485,000USD","x-skills-required":["Python","Rust","Go","Build systems","CI/CD pipelines","Developer tooling","Container orchestration","Infrastructure at scale"],"x-skills-preferred":["CI orchestration tools","Merge queue management","Remote build execution systems","Nix/NixOS/Docker","Large image/package sets","CLI tools","Developer-facing services","GitHub API and automation workflows"],"datePosted":"2026-03-08T13:53:03.879Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Rust, Go, Build systems, CI/CD pipelines, Developer tooling, Container orchestration, Infrastructure at scale, CI orchestration tools, Merge queue management, Remote build execution systems, Nix/NixOS/Docker, Large image/package sets, CLI tools, Developer-facing services, GitHub API and automation workflows","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_da726093-b19"},"title":"Research Engineer, Discovery","description":"<p><strong>About the Role</strong></p>\n<p>As a Research Engineer on our team, you will work end to end across the whole model stack, identifying and addressing key infra blockers on the path to scientific AGI. Strong candidates should have familiarity with elements of language model training, evaluation, and inference and eagerness to quickly dive and get up to speed in areas they are not yet an expert on. This may include performance optimization, distributed systems, VM/sandboxing/container deployment, and large scale data pipelines.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design and implement large-scale infrastructure systems to support AI scientist training, evaluation, and deployment across distributed environments</li>\n<li>Identify and resolve infrastructure bottlenecks impeding progress toward scientific capabilities</li>\n<li>Develop robust and reliable evaluation frameworks for measuring progress towards scientific AGI.</li>\n<li>Build scalable and performant VM/sandboxing/container architectures to safely execute long-horizon AI tasks and scientific workflows</li>\n<li>Collaborate to translate experimental requirements into production-ready infrastructure</li>\n<li>Develop large scale data pipelines to handle advanced language model training requirements</li>\n<li>Optimize large scale training and inference pipelines for stable and efficient reinforcement learning</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 6+ years of highly-relevant experience in infrastructure engineering with demonstrated expertise in large-scale distributed systems</li>\n<li>Are a strong communicator and enjoy working collaboratively</li>\n<li>Possess deep knowledge of performance optimization techniques and system architectures for high-throughput ML workloads</li>\n<li>Have experience with containerization technologies (Docker, Kubernetes) and orchestration at scale</li>\n<li>Have proven track record of building large-scale data pipelines and distributed storage systems</li>\n<li>Excel at diagnosing and resolving complex infrastructure challenges in production environments</li>\n<li>Can work effectively across the full ML stack from data pipelines to performance optimization</li>\n<li>Have experience collaborating with other researchers to scale experimental ideas</li>\n<li>Thrive in fast-paced environments and can rapidly iterate from experimentation to production</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Experience with language model training infrastructure and distributed ML frameworks (PyTorch, JAX, etc.)</li>\n<li>Background in building infrastructure for AI research labs or large-scale ML organizations</li>\n<li>Knowledge of GPU/TPU architectures and language model inference optimization</li>\n<li>Experience with cloud platforms (AWS, GCP) at enterprise scale</li>\n<li>Familiarity with VM and container orchestration.</li>\n<li>Experience with workflow orchestration tools and experiment management systems</li>\n<li>History working with large scale reinforcement learning</li>\n<li>Comfort with large scale data pipelines (Beam, Spark, Dask, …)</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale projects, and we&#39;re committed to making a positive impact on the world.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_da726093-b19","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4669581008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $850,000 USD","x-skills-required":["infrastructure engineering","large-scale distributed systems","performance optimization","containerization technologies","orchestration at scale","data pipelines","distributed storage systems","complex infrastructure challenges","ML stack","workflow orchestration tools","experiment management systems","reinforcement learning","large scale data pipelines"],"x-skills-preferred":["language model training infrastructure","distributed ML frameworks","GPU/TPU architectures","language model inference optimization","cloud platforms","VM and container orchestration","workflow orchestration tools","experiment management systems","large scale reinforcement learning","large scale data pipelines"],"datePosted":"2026-03-08T13:46:32.661Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"infrastructure engineering, large-scale distributed systems, performance optimization, containerization technologies, orchestration at scale, data pipelines, distributed storage systems, complex infrastructure challenges, ML stack, workflow orchestration tools, experiment management systems, reinforcement learning, large scale data pipelines, language model training infrastructure, distributed ML frameworks, GPU/TPU architectures, language model inference optimization, cloud platforms, VM and container orchestration, workflow orchestration tools, experiment management systems, large scale reinforcement learning, large scale data pipelines","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6b3b4a98-297"},"title":"Enterprise Product Engineer","description":"<p><strong>About the role</strong></p>\n<p>As an Enterprise Product Engineer at Cursor, you&#39;ll architect, implement, and deploy projects end-to-end to build enterprise-grade features that help large organisations adopt and scale with Cursor.</p>\n<p><strong>You may be a fit if</strong></p>\n<p>You have an entrepreneurial spirit and love creating outsized business impact. You want to be at the frontier of AI transformation with the best companies in the world. You&#39;re passionate about building great products that blend excellent engineering with a taste for models and design. You have a propensity for creative ideas and have a knack for making powerful tools without compromising their ease-of-use.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Architect, implement, and deploy projects end-to-end to build enterprise-grade features that help large organisations adopt and scale with Cursor.</li>\n<li>Collaborate with cross-functional teams to define and deliver product roadmaps that meet business objectives.</li>\n<li>Analyse customer needs and develop solutions that meet their requirements.</li>\n<li>Work closely with the design team to create user-centred products that are both functional and aesthetically pleasing.</li>\n<li>Develop and maintain high-quality code that is scalable, maintainable, and efficient.</li>\n<li>Participate in code reviews to ensure that the codebase is of the highest quality.</li>\n<li>Stay up-to-date with the latest technologies and trends in the industry.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package.</li>\n<li>Opportunity to work with a recognised leader in the AI industry.</li>\n<li>Collaborative and dynamic work environment.</li>\n<li>Flexible working hours and remote work options.</li>\n<li>Access to the latest technologies and tools.</li>\n<li>Opportunities for professional growth and development.</li>\n</ul>\n<p><strong>What we&#39;re looking for</strong></p>\n<ul>\n<li>3+ years of experience in software development, preferably in a product engineering role.</li>\n<li>Strong understanding of software development principles, patterns, and best practices.</li>\n<li>Experience with Agile development methodologies and version control systems.</li>\n<li>Strong problem-solving skills and attention to detail.</li>\n<li>Excellent communication and collaboration skills.</li>\n<li>Experience with cloud-based technologies and containerisation.</li>\n<li>Familiarity with machine learning and AI concepts.</li>\n<li>Experience with design thinking and user-centred design.</li>\n<li>Strong understanding of security principles and best practices.</li>\n<li>Experience with DevOps practices and tools.</li>\n<li>Familiarity with testing frameworks and methodologies.</li>\n<li>Experience with continuous integration and continuous deployment.</li>\n<li>Strong understanding of scalability and performance optimisation.</li>\n<li>Experience with monitoring and logging tools.</li>\n<li>Familiarity with containerisation and orchestration.</li>\n<li>Experience with cloud-based storage and databases.</li>\n<li>Familiarity with security frameworks and best practices.</li>\n<li>Experience with compliance and regulatory requirements.</li>\n<li>Familiarity with industry standards and best practices.</li>\n</ul>\n<p><strong>Preferred skills</strong></p>\n<ul>\n<li>Experience with Python, Java, or C++.</li>\n<li>Familiarity with cloud-based platforms such as AWS or Azure.</li>\n<li>Experience with containerisation and orchestration tools such as Docker and Kubernetes.</li>\n<li>Familiarity with machine learning and AI frameworks such as TensorFlow or PyTorch.</li>\n<li>Experience with design thinking and user-centred design tools such as Sketch or Figma.</li>\n<li>Familiarity with testing frameworks and methodologies such as JUnit or PyUnit.</li>\n<li>Experience with continuous integration and continuous deployment tools such as Jenkins or GitLab CI/CD.</li>\n<li>Familiarity with monitoring and logging tools such as Prometheus or Grafana.</li>\n<li>Experience with security frameworks and best practices such as OWASP or NIST.</li>\n<li>Familiarity with compliance and regulatory requirements such as GDPR or HIPAA.</li>\n<li>Experience with industry standards and best practices such as ISO 27001 or PCI-DSS.</li>\n</ul>\n<p><strong>Salary range</strong></p>\n<p>£80,000 - £120,000 per annum.</p>\n<p><strong>Category</strong></p>\n<p>Engineering.</p>\n<p><strong>Industry</strong></p>\n<p>Technology.</p>\n<p><strong>Experience level</strong></p>\n<p>Mid.</p>\n<p><strong>Employment type</strong></p>\n<p>Full-time.</p>\n<p><strong>Workplace type</strong></p>\n<p>Remote.</p>\n<p><strong>Required skills</strong></p>\n<ul>\n<li>Software development principles, patterns, and best practices.</li>\n<li>Agile development methodologies and version control systems.</li>\n<li>Problem-solving skills and attention to detail.</li>\n<li>Communication and collaboration skills.</li>\n<li>Cloud-based technologies and containerisation.</li>\n<li>Machine learning and AI concepts.</li>\n<li>Design thinking and user-centred design.</li>\n<li>Security principles and best practices.</li>\n<li>DevOps practices and tools.</li>\n<li>Testing frameworks and methodologies.</li>\n<li>Continuous integration and continuous deployment.</li>\n<li>Scalability and performance optimisation.</li>\n<li>Monitoring and logging tools.</li>\n<li>Containerisation and orchestration.</li>\n<li>Cloud-based storage and databases.</li>\n<li>Security frameworks and best practices.</li>\n<li>Compliance and regulatory requirements.</li>\n<li>Industry standards and best practices.</li>\n</ul>\n<p><strong>Preferred skills</strong></p>\n<ul>\n<li>Python, Java, or C++.</li>\n<li>Cloud-based platforms such as AWS or Azure.</li>\n<li>Containerisation and orchestration tools such as Docker and Kubernetes.</li>\n<li>Machine learning and AI frameworks such as TensorFlow or PyTorch.</li>\n<li>Design thinking and user-centred design tools such as Sketch or Figma.</li>\n<li>Testing frameworks and methodologies such as JUnit or PyUnit.</li>\n<li>Continuous integration and continuous deployment tools such as Jenkins or GitLab CI/CD.</li>\n<li>Monitoring and logging tools such as Prometheus or Grafana.</li>\n<li>Security frameworks and best practices such as OWASP or NIST.</li>\n<li>Compliance and regulatory requirements such as GDPR or HIPAA.</li>\n<li>Industry standards and best practices such as ISO 27001 or PCI-DSS.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6b3b4a98-297","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cursor","sameAs":"https://cursor.com","logo":"https://logos.yubhub.co/cursor.com.png"},"x-apply-url":"https://cursor.com/careers/software-engineer-enterprise","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"£80,000 - £120,000 per annum","x-skills-required":["Software development principles, patterns, and best practices","Agile development methodologies and version control systems","Problem-solving skills and attention to detail","Communication and collaboration skills","Cloud-based technologies and containerisation","Machine learning and AI concepts","Design thinking and user-centred design","Security principles and best practices","DevOps practices and tools","Testing frameworks and methodologies","Continuous integration and continuous deployment","Scalability and performance optimisation","Monitoring and logging tools","Containerisation and orchestration","Cloud-based storage and databases","Security frameworks and best practices","Compliance and regulatory requirements","Industry standards and best practices"],"x-skills-preferred":["Python, Java, or C++","Cloud-based platforms such as AWS or Azure","Containerisation and orchestration tools such as Docker and Kubernetes","Machine learning and AI frameworks such as TensorFlow or PyTorch","Design thinking and user-centred design tools such as Sketch or Figma","Testing frameworks and methodologies such as JUnit or PyUnit","Continuous integration and continuous deployment tools such as Jenkins or GitLab CI/CD","Monitoring and logging tools such as Prometheus or Grafana","Security frameworks and best practices such as OWASP or NIST","Compliance and regulatory requirements such as GDPR or HIPAA","Industry standards and best practices such as ISO 27001 or PCI-DSS"],"datePosted":"2026-03-08T00:20:06.582Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software development principles, patterns, and best practices, Agile development methodologies and version control systems, Problem-solving skills and attention to detail, Communication and collaboration skills, Cloud-based technologies and containerisation, Machine learning and AI concepts, Design thinking and user-centred design, Security principles and best practices, DevOps practices and tools, Testing frameworks and methodologies, Continuous integration and continuous deployment, Scalability and performance optimisation, Monitoring and logging tools, Containerisation and orchestration, Cloud-based storage and databases, Security frameworks and best practices, Compliance and regulatory requirements, Industry standards and best practices, Python, Java, or C++, Cloud-based platforms such as AWS or Azure, Containerisation and orchestration tools such as Docker and Kubernetes, Machine learning and AI frameworks such as TensorFlow or PyTorch, Design thinking and user-centred design tools such as Sketch or Figma, Testing frameworks and methodologies such as JUnit or PyUnit, Continuous integration and continuous deployment tools such as Jenkins or GitLab CI/CD, Monitoring and logging tools such as Prometheus or Grafana, Security frameworks and best practices such as OWASP or NIST, Compliance and regulatory requirements such as GDPR or HIPAA, Industry standards and best practices such as ISO 27001 or PCI-DSS","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_82a0bb5c-fd2"},"title":"Software Engineer, Identity Infrastructure Engineering","description":"<p><strong>Software Engineer, Identity Infrastructure Engineering</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco; New York City; Remote - US; Seattle</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>IT</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>San Francisco, Seattle or New York City $230K – $385K • Offers Equity</li>\n<li>Zone A $207K – $346.5K • Offers Equity</li>\n<li>Zone B $184K – $308K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n<li>401(k) retirement plan with employer match</li>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n<li>Mental health and wellness support</li>\n<li>Employer-paid basic life and disability coverage</li>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n<li>Relocation support for eligible employees</li>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>Security is at the foundation of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity. The Identity Infrastructure Engineering team sits at the core of this effort, designing and building the identity and access management solutions that protect our model weights, customer data, and critical systems across multiple cloud environments. We partner with teams across OpenAI—Applied Engineering, Research, IT, and Security—to provide a secure and scalable platform for permissioning, orchestration, and innovative AI research.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Software Engineer on the Identity Infrastructure Engineering team, you’ll be instrumental in creating, deploying, and operating foundational security tools and infrastructure. You will work with a broad range of technologies to support multi-cloud deployments, ensuring that researchers and engineers can safely build, test, and scale transformative AI systems. The role requires a balance of strong technical depth, cross-functional collaboration, and a passion for embedding secure-by-default principles into every layer of our stack.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build new features for our IAM platform that seamlessly integrate with evolving cloud services, enabling teams to work efficiently while adhering to security best practices.</li>\n<li>Drive security innovation by designing tools, processes, and architectures that protect data at scale and reinforce a secure development culture across the organization.</li>\n<li>Collaborate cross-functionally with researchers, engineers, and compliance teams to address security requirements for multi-cloud deployments, large-scale model training, and emerging AI use cases.</li>\n<li>Implement and refine access policies that strike the right balance between enabling rapid experimentation and protecting high-value assets, including model weights and customer data.</li>\n<li>Troubleshoot complex identity or access issues across distributed systems, ensuring minimal downtime and a safe environment for AI research and product teams.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>A background in building secure systems—from core IAM services to orchestration layers that manage credentials, roles, or policies at scale.</li>\n<li>Proficiency in programming languages such as Python, Go, or similar, with a track record of writing high-quality, maintainable code.</li>\n<li>Experience with modern cloud infrastructure (AWS, Azure, GCP) and familiarity with industry-standard security protocols (OAuth, SAML, OpenID Connect) and authentication/authorization patterns.</li>\n<li>A security-focused mindset, with knowledge of threat modeling, risk assessment, and the ability to embed security features throughout the software development lifecycle.</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with containerization (Docker, Kubernetes) and orchestration tools (e.g., Terraform, Ansible).</li>\n<li>Familiarity with CI/CD pipelines and automated testing frameworks.</li>\n<li>Knowledge of machine learning and AI concepts, including model training, deployment, and security.</li>\n<li>Experience with cloud security services (e.g., AWS IAM, Azure Active Directory).</li>\n<li>Familiarity with DevOps practices and tools (e.g., Jenkins, GitLab).</li>\n</ul>\n<p><strong>What You’ll Get</strong></p>\n<ul>\n<li>Competitive salary and equity package</li>\n<li>Comprehensive benefits package, including medical, dental, and vision insurance</li>\n<li>401(k) retirement plan with employer match</li>\n<li>Paid parental leave and medical/caregiver leave</li>\n<li>Flexible PTO and paid holidays</li>\n<li>Professional development opportunities</li>\n<li>Collaborative and dynamic work environment</li>\n</ul>\n<p><strong>How to Apply</strong></p>\n<p>If you’re passionate about building secure systems and contributing to the development of cutting-edge AI technology, we encourage you to apply for this exciting opportunity. Please submit your resume, cover letter, and any relevant work samples or projects you’d like to share. We can’t wait to hear from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_82a0bb5c-fd2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/551b0d0d-46c2-42fb-bb05-46e2fba8d4db","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$230K – $385K","x-skills-required":["Python","Go","AWS","Azure","GCP","OAuth","SAML","OpenID Connect","containerization","Docker","Kubernetes","Terraform","Ansible","CI/CD pipelines","automated testing frameworks","machine learning","AI concepts","model training","deployment","security","cloud security services","AWS IAM","Azure Active Directory","DevOps practices","Jenkins","GitLab"],"x-skills-preferred":["experience with containerization (Docker, Kubernetes) and orchestration tools (e.g., Terraform, Ansible)","familiarity with CI/CD pipelines and automated testing frameworks","knowledge of machine learning and AI concepts, including model training, deployment, and security","experience with cloud security services (e.g., AWS IAM, Azure Active Directory)","familiarity with DevOps practices and tools (e.g., Jenkins, GitLab)"],"datePosted":"2026-03-06T18:36:29.606Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco; New York City; Remote - US; Seattle"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, AWS, Azure, GCP, OAuth, SAML, OpenID Connect, containerization, Docker, Kubernetes, Terraform, Ansible, CI/CD pipelines, automated testing frameworks, machine learning, AI concepts, model training, deployment, security, cloud security services, AWS IAM, Azure Active Directory, DevOps practices, Jenkins, GitLab, experience with containerization (Docker, Kubernetes) and orchestration tools (e.g., Terraform, Ansible), familiarity with CI/CD pipelines and automated testing frameworks, knowledge of machine learning and AI concepts, including model training, deployment, and security, experience with cloud security services (e.g., AWS IAM, Azure Active Directory), familiarity with DevOps practices and tools (e.g., Jenkins, GitLab)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":385000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4a7597fd-d7a"},"title":"Senior Data Engineer","description":"<p>Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make an impact globally while working across a global team located across 5 continents. Razer is also a great place to work, providing you the unique, gamer-centric #LifeAtRazer experience that will put you in an accelerated growth, both personally and professionally.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference. You will collaborate closely with AI/ML engineers, data scientists, and platform teams to build the next generation of AI-driven products.</p>\n<ul>\n<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>\n<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>\n<li>Develop high performance data pipelines that process structured, semi-structured, and unstructured data at scale, supporting the various AI applications</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Hands on experience working with Vector/Graph;Neo4j</li>\n<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4a7597fd-d7a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Razer","sameAs":"https://razer.wd3.myworkdayjobs.com","logo":"https://logos.yubhub.co/razer.com.png"},"x-apply-url":"https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Hands on experience working with Vector/Graph;Neo4j","3+ years of experience in data engineering, working on AI/ML-driven data architectures"],"x-skills-preferred":["Python","SQL","Experience in developing and deploying applications running on cloud infrastructure such as AWS, Azure or Google Cloud Platform using Infrastructure as code tools such as Terraform, containerization tools like Dockers, container orchestration platforms like Kubernetes","Experience using orchestration tools like Airflow or Prefect, distributed computing framework like Spark or Dask, data transformation tool like Data Build Tool (DBT)","Excellent with various data processing techniques (both streaming and batch), managing and optimizing data storage (Data Lake, Lake House and Database, SQL, and NoSQL) is essential."],"datePosted":"2026-01-01T15:49:59.491Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Hands on experience working with Vector/Graph;Neo4j, 3+ years of experience in data engineering, working on AI/ML-driven data architectures, Python, SQL, Experience in developing and deploying applications running on cloud infrastructure such as AWS, Azure or Google Cloud Platform using Infrastructure as code tools such as Terraform, containerization tools like Dockers, container orchestration platforms like Kubernetes, Experience using orchestration tools like Airflow or Prefect, distributed computing framework like Spark or Dask, data transformation tool like Data Build Tool (DBT), Excellent with various data processing techniques (both streaming and batch), managing and optimizing data storage (Data Lake, Lake House and Database, SQL, and NoSQL) is essential."},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e5eb908e-6f9"},"title":"Senior Data Engineer","description":"<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference.</p>\n<ul>\n<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>\n<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Hands on experience working with Vector/Graph;Neo4j</li>\n<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e5eb908e-6f9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Razer","sameAs":"https://razer.wd3.myworkdayjobs.com","logo":"https://logos.yubhub.co/razer.com.png"},"x-apply-url":"https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Vector/Graph;Neo4j","data engineering","AI/ML-driven data architectures"],"x-skills-preferred":["Python","SQL","Terraform","containerization tools like Dockers","container orchestration platforms like Kubernetes","orchestration tools like Airflow or Prefect","distributed computing framework like Spark or Dask","data transformation tool like Data Build Tool (DBT)"],"datePosted":"2025-12-26T10:53:07.867Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Vector/Graph;Neo4j, data engineering, AI/ML-driven data architectures, Python, SQL, Terraform, containerization tools like Dockers, container orchestration platforms like Kubernetes, orchestration tools like Airflow or Prefect, distributed computing framework like Spark or Dask, data transformation tool like Data Build Tool (DBT)"}]}