{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/scala"},"x-facet":{"type":"skill","slug":"scala","display":"Scala","count":100},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1a030d96-aef"},"title":"Head of Retention","description":"<p>As our Head of Retention, your mission is to ensure that once a user joins bunq, they stay and fall in love with our product.</p>\n<p>You will be the voice of the user, deeply understanding their life-stage needs and building a world-class, AI-driven retention strategy. Your work will deliver personalized value, foster loyalty, and turn our users into lifelong champions.</p>\n<p><strong>Take Ownership</strong></p>\n<ul>\n<li>Understand behaviour (not just opinions)</li>\n</ul>\n<ul>\n<li>Build a clear view of how users actually use bunq: frequency, depth, moments that matter, and drop-off patterns.</li>\n</ul>\n<ul>\n<li>Detect when someone is thriving (“active”) vs. slipping away (“at risk”) vs. 
gone (“churned”), and why.</li>\n</ul>\n<p><strong>Make Bunq Feel Relevant to Each Life Stage</strong></p>\n<ul>\n<li>Turn life events and intent into simple, actionable personalization (not creepy, not noisy).</li>\n</ul>\n<ul>\n<li>Guide users to discover the features and benefits that match their current situation, and make them fall in love with bunq.</li>\n</ul>\n<p><strong>Communicate with Precision and Empathy</strong></p>\n<ul>\n<li>Deliver messaging that’s useful, well-timed, and segment-specific: tips, reminders, rewards, nudges, winback.</li>\n</ul>\n<ul>\n<li>Ensure every message earns attention and builds trust (tone, timing, content, frequency).</li>\n</ul>\n<p><strong>Fix Retention Leaks at the Source</strong></p>\n<ul>\n<li>Identify journey gaps and product friction that drive churn.</li>\n</ul>\n<ul>\n<li>Partner with Product, Data, Research, Support, and Operations to structurally remove issues, not patch them with more messaging.</li>\n</ul>\n<p><strong>Win Users Back</strong></p>\n<ul>\n<li>Design and run winback programs that rekindle the relationship with a clear reason to return.</li>\n</ul>\n<ul>\n<li>Learn what actually reactivates behaviour and compound it.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Build, lead, coach, and grow a 10–15 person team</li>\n</ul>\n<ul>\n<li>Set the retention strategy across lifecycle touchpoints: onboarding → activation → habit building → monetization → loyalty → winback.</li>\n</ul>\n<ul>\n<li>Own the lifecycle roadmap and run a high-velocity experimentation program (A/B testing, multivariate where useful).</li>\n</ul>\n<ul>\n<li>Build a strong measurement framework (north-star, segment KPIs, journey-level dashboards, causal thinking).</li>\n</ul>\n<ul>\n<li>Establish a tight feedback loop: quantitative behaviour + qualitative user insight (interviews, outreach, research).</li>\n</ul>\n<ul>\n<li>Proactively monitor user sentiment and emerging issues (including external signals) and drive 
fixes before they scale.</li>\n</ul>\n<ul>\n<li>Create scalable, AI-first systems that help navigate and orchestrate the user journey</li>\n</ul>\n<p><strong>Your Space to Perform</strong></p>\n<p>We give you the space and the tools you need to succeed</p>\n<ul>\n<li>Join forces with great colleagues across the globe to revolutionize banking</li>\n</ul>\n<ul>\n<li>Make lasting impact by working on complex &amp; exciting challenges</li>\n</ul>\n<ul>\n<li>Profit sharing based on the impact you make and bunq’s performance</li>\n</ul>\n<ul>\n<li>Great, international colleagues who share your mindset</li>\n</ul>\n<ul>\n<li>Hybrid setup: after 3 months in-office, work 2 days remote, 3 days in-office weekly.</li>\n</ul>\n<ul>\n<li>Digital Nomad program: work remotely 1 week per quarter after 1 year and 3 weeks per quarter after 2 years</li>\n</ul>\n<ul>\n<li>We support growth with bunq Academy and €1,500 annual learning budget</li>\n</ul>\n<ul>\n<li>A massive discount with Urban Sports for your wellbeing</li>\n</ul>\n<ul>\n<li>Travel expenses are covered whether you come walking or by bike, bus or car (though we prefer green choices)</li>\n</ul>\n<ul>\n<li>A MacBook so you can Get Shit Done with us</li>\n</ul>\n<ul>\n<li>Delicious lunches from our fabulous in-house chefs with vegan and vegetarian options</li>\n</ul>\n<ul>\n<li>An optional pension plan with monthly contribution from bunq</li>\n</ul>\n<ul>\n<li>Monthly contribution to your phone and internet bills</li>\n</ul>\n<ul>\n<li>Friday drinks and other celebrations - bunq style</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1a030d96-aef","directApply":true,"hiringOrganization":{"@type":"Organization","name":"bunq","sameAs":"https://careers.bunq.com","logo":"https://logos.yubhub.co/careers.bunq.com.png"},"x-apply-url":"https://careers.bunq.com/o/head-of-retention-new","x-work-arrangement":"hybrid","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Retention strategy","AI-driven","Personalization","Segmentation","A/B testing","Multivariate testing","Causal thinking","Scalable systems","User journey orchestration"],"x-skills-preferred":[],"datePosted":"2026-04-19T13:30:44.895Z","employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Retention strategy, AI-driven, Personalization, Segmentation, A/B testing, Multivariate testing, Causal thinking, Scalable systems, User journey orchestration"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c7eb7909-149"},"title":"SOS Hotline Support Guide","description":"<p>At bunq, we&#39;re not just building a bank; we&#39;re reshaping how people experience financial freedom. 
As an SOS Hotline Support Guide, you&#39;ll be the calm in the storm, embodying our philosophy by taking ownership, cutting through confusion, and restoring trust when it matters most.</p>\n<p>Take immediate ownership of our 24/7 SOS Hotline, acting as the first point of contact and mastering crisis de-escalation to guide users from panic to a feeling of security.</p>\n<p>Serve as the single source of truth by using strong analytical and investigative skills to troubleshoot complex problems, get to the root cause, and deliver swift, definitive resolutions.</p>\n<p>Seamlessly connect urgent user needs with our internal processes, ensuring a reassuring and efficient experience without exposing the complexity behind the scenes.</p>\n<p>Identify, analyze, and escalate systemic issues you encounter, acting as our first line of defense to protect our entire user base and prevent future problems.</p>\n<p>You will maintain unwavering composure and empathy in high-pressure, sensitive situations, acting as a stable and reassuring presence for users in distress.</p>\n<p>You are a natural at turning friction into flow. You have a background in handling sensitive and urgent issues with a calm and structured approach.</p>\n<p>You are naturally curious and adept at digging deep to uncover the root cause of an issue, not just treating the symptoms.</p>\n<p>You have a close-to-native or native command of English. 
Your communication is clear, concise, and empathetic.</p>\n<p>You have prior experience in a high-pressure environment like a contact center, technical support, or an incident response team, preferably within fintech or financial services.</p>\n<p>You are ready to work on a 24/7 shift schedule, including weekends, nights, and holidays, because you know that urgent situations don&#39;t stick to a 9-to-5 schedule.</p>\n<p>Your space to perform</p>\n<p>We give you the space and the tools you need to succeed.</p>\n<p>Join forces with great colleagues across the globe to revolutionise banking.</p>\n<p>Make lasting impact by working on complex &amp; exciting challenges.</p>\n<p>Accelerate your career growth with bunq Academy and a 1500 EUR annual learning budget.</p>\n<p>Flex Benefits: €70 monthly budget via Re: benefit, offering access to 150+ perks tailored to your lifestyle.</p>\n<p>A Macbook to keep with you while you&#39;re with us.</p>\n<p>Hybrid setup: after 1 month in-office, work 2 days remote, 3 days in-office weekly.</p>\n<p>Digital Nomad Program: After your first year, enjoy up to 20 days per year to work while traveling, combining flexibility with strong team collaboration.</p>\n<p>We reward tenure with a dedicated travel budget: €1.5k after 2 years.</p>\n<p>Lunch and snacks at the office, vegan options included.</p>\n<p>Private health insurance, just in case.</p>\n<p>Friday drinks, team events, and other celebrations - bunq style!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c7eb7909-149","directApply":true,"hiringOrganization":{"@type":"Organization","name":"bunq","sameAs":"https://careers.bunq.com","logo":"https://logos.yubhub.co/careers.bunq.com.png"},"x-apply-url":"https://careers.bunq.com/o/sos-hotline-support-guide","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["English","Analytical skills","Investigative skills","Crisis de-escalation","Problem-solving"],"x-skills-preferred":["Prior experience in a high-pressure environment","Fintech or financial services background"],"datePosted":"2026-04-19T13:28:13.712Z","employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"English, Analytical skills, Investigative skills, Crisis de-escalation, Problem-solving, Prior experience in a high-pressure environment, Fintech or financial services background"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b33cbd91-bc9"},"title":"Systematic Production Support Engineer","description":"<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>\n<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. 
You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>\n<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>\n<li>Implementing automated systems and processes focused on trading and operations</li>\n<li>Streamlining development and deployment processes</li>\n</ul>\n<p>Technical qualifications include:</p>\n<ul>\n<li>5+ years of development experience in Python</li>\n<li>Experience working in a Linux/Unix environment</li>\n<li>Experience working with PostgreSQL or other relational databases</li>\n</ul>\n<p>Preferred skills and experience include:</p>\n<ul>\n<li>Understanding of NLP, supervised/non-supervised learning, and Generative AI models</li>\n<li>Experience operating and monitoring low-latency trading environments</li>\n<li>Familiarity with quantitative finance and electronic trading concepts</li>\n<li>Familiarity with financial data</li>\n<li>Broad understanding of equities, futures, FX, or other financial instruments</li>\n<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>\n<li>Experience with Apache/Confluent Kafka</li>\n<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>\n<li>Experience with containerization and orchestration technologies</li>\n<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>\n<li>Contributions to open-source projects</li>\n</ul>\n<p>This 
is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b33cbd91-bc9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954716155","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Linux/Unix","PostgreSQL","NLP","supervised/non-supervised learning","Generative AI models","low-latency trading environments","quantitative finance","electronic trading concepts","financial data","equities","futures","FX","distributed systems","backend development","C/C++","Java","Scala","Go","C#","Apache/Confluent Kafka","SDLC pipelines","containerization","orchestration technologies","AWS","GCP","Azure"],"x-skills-preferred":["Understanding of NLP, supervised/non-supervised learning, and Generative AI models","Experience operating and monitoring low-latency trading environments","Familiarity with quantitative finance and electronic trading concepts","Familiarity with financial data","Broad understanding of equities, futures, FX, or other financial instruments","Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#","Experience with Apache/Confluent Kafka","Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)","Experience with containerization and orchestration technologies","Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure","Contributions to open-source 
projects"],"datePosted":"2026-04-18T22:14:36.583Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Linux/Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipelines, containerization, orchestration technologies, AWS, GCP, Azure, Understanding of NLP, supervised/non-supervised learning, and Generative AI models, Experience operating and monitoring low-latency trading environments, Familiarity with quantitative finance and electronic trading concepts, Familiarity with financial data, Broad understanding of equities, futures, FX, or other financial instruments, Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#, Experience with Apache/Confluent Kafka, Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline), Experience with containerization and orchestration technologies, Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure, Contributions to open-source projects"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_32932504-2b5"},"title":"Systematic Production Support Engineer","description":"<p>We are looking for an experienced professional to help us scale our systematic operations and support engineering capabilities.</p>\n<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. 
Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>\n<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Build, develop and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>\n<li>Work with portfolio managers and other internal customers to reduce operational risk through:</li>\n<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>\n<li>Implementation of automated systems and processes focused on trading and operations.</li>\n<li>Streamlining development and deployment processes.</li>\n<li>Implementation of MCP servers focused on assisting the rest of the Support Engineering team as well as proactively monitoring the production environment.</li>\n</ul>\n<p>Technical Qualification:</p>\n<ul>\n<li>5+ years of development experience in Python.</li>\n<li>Experience working in a Linux / Unix environment.</li>\n<li>Experience working with PostgreSQL or other relational databases.</li>\n<li>Ability to understand and discuss requirements from portfolio managers.</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Understanding of NLP, supervised/non-supervised learning and Generative AI models.</li>\n<li>Experience operating and monitoring low-latency trading environments.</li>\n<li>Familiarity with quantitative finance and electronic trading concepts.</li>\n<li>Familiarity with financial data.</li>\n<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>\n<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#.</li>\n<li>Experience with Apache / Confluent Kafka.</li>\n<li>Experience automating SDLC pipelines (e.g., 
Jenkins, TeamCity, or AWS CodePipeline).</li>\n<li>Experience with containerization and orchestration technologies.</li>\n<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>\n<li>Contributions to open-source projects.</li>\n</ul>\n<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_32932504-2b5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954627501","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$100,000 to $175,000","x-skills-required":["Python","Linux / Unix","PostgreSQL","NLP","supervised/non-supervised learning","Generative AI models"],"x-skills-preferred":["Apache / Confluent Kafka","C/C++","Java","Scala","Go","C#","containerization","orchestration technologies","AWS","GCP","Azure"],"datePosted":"2026-04-18T22:13:42.254Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America · Old Greenwich, Connecticut, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Linux / Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI 
models, Apache / Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100000,"maxValue":175000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7275ef33-009"},"title":"Staff Data Engineer","description":"<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows to connect operational systems, data for analytics and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize the code to ensure processes perform optimally, and lead work on database management.</p>\n<p>Communicating Between Technical and Non-Technical Colleagues</p>\n<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>\n<p>Data Analysis and Synthesis</p>\n<p>You will undertake data profiling and source system analysis, and present clear insights to colleagues to support the end use of the data.</p>\n<p>Data Development Process</p>\n<p>You will design, build and test data products that are complex or large scale, and build teams to complete data integration services.</p>\n<p>Data Innovation</p>\n<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>\n<p>Data Integration Design</p>\n<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>\n<p>Data Modeling</p>\n<p>You will produce relevant data models across multiple subject areas, explain which 
models to use for which purpose, understand industry-recognised data modelling patterns and standards, and when to apply them, compare and align different data models.</p>\n<p>Metadata Management</p>\n<p>You will design an appropriate metadata repository and present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, provide oversight and advice to less experienced members of the team.</p>\n<p>Problem Resolution</p>\n<p>You will respond to problems in databases, data processes, data products and services as they occur, initiate actions, monitor services and identify trends to resolve problems, determine the appropriate remedy and assist with its implementation, and with preventative measures.</p>\n<p>Programming and Build</p>\n<p>You will use agreed standards and tools to design, code, test, correct and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, collaborate with others to review specifications where appropriate.</p>\n<p>Technical Understanding</p>\n<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>\n<p>Testing</p>\n<p>You will review requirements and specifications, and define test conditions, identify issues and risks associated with work, analyse and report test activities and results.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7275ef33-009","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976928777","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$114,400 to $171,600","x-skills-required":["Proficiency in programming language such as 
Python or Java","Experience with Big Data technologies such as Hadoop, Spark, and Kafka","Familiarity with ETL processes and tools","Knowledge of SQL and NoSQL databases","Strong understanding of relational databases","Experience with data warehousing solutions","Proficiency with cloud platforms","Expertise in data modeling and design","Experience in designing and building scalable data pipelines","Experience with RESTful APIs and data integration"],"x-skills-preferred":["Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified)","Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field","Strong analytical and communication skills","Ability to work collaboratively in a team environment","High level of accuracy and attention to detail"],"datePosted":"2026-04-18T22:12:56.654Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to 
detail","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":171600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b9d8b874-95f"},"title":"Manager, Global Clinical Solutions (IRT Lead)","description":"<p>Global Clinical Solutions (GCS) delivers services and technology that enable clinical development programmes to run to time, cost and quality.</p>\n<p>The Manager, GCS supports teams to improve process effectiveness and performance and provides guidance in the development and maintenance of processes, systems and services owned by GCS.</p>\n<p>This role coordinates, leads and delivers GCS services across projects and activities, ensuring operational excellence across all delivery models.</p>\n<p>It also leads and project manages improvement initiatives that strengthen how clinical development is delivered and how patients ultimately benefit from our science.</p>\n<p>As a key member of an activity team, the Manager, GCS coordinates and delivers GCS services, overseeing lifecycle management and business continuity for assigned projects, services and technologies.</p>\n<p>The role provides expert support to user communities by conducting process, system and tool training, facilitating knowledge sharing, establishing best practices and maintaining clear communication with stakeholders across GCS and AstraZeneca.</p>\n<p>It involves conducting critical analyses of processes and tools to define business usage, identifying opportunities to improve efficiency and effectiveness while reducing business continuity risks, and contributing to or developing business cases for continuous improvement projects.</p>\n<p>The Manager, GCS leads or manages business improvement projects using lean principles, including planning, prioritising, implementing and tracking delivery.</p>\n<p>Acting as a source of knowledge in one 
or more GCS areas, the role supports the implementation of changes that enhance how functions and teams perform.</p>\n<p>It evaluates and monitors programme performance to ensure implementation stays on target, trains colleagues in continuous improvement and new ways of working, and helps embed a culture of change.</p>\n<p>The role grows capabilities, applies new approaches to improve work, positively impacts team performance and creates learning opportunities for others.</p>\n<p>It is also responsible for knowledge management of continuous improvement activities, ensuring insights are captured and used to shape future initiatives.</p>\n<p>Ready to help transform how clinical development operates?</p>\n<p>Essential Skills/Experience:</p>\n<ul>\n<li>BS, MS, or PhD in a biological or healthcare-related field with 2+ years of relevant pharmaceutical or clinical development industry experience</li>\n</ul>\n<ul>\n<li>Ability to work collaboratively; proven organisational and analytical skills, and proven skills to deliver to time, cost and quality</li>\n</ul>\n<ul>\n<li>Good project management skills</li>\n</ul>\n<ul>\n<li>Excellent knowledge of spoken and written English</li>\n</ul>\n<ul>\n<li>Strong business communication, stakeholder management and presentation skills</li>\n</ul>\n<ul>\n<li>Well-developed organisational and interpersonal skills</li>\n</ul>\n<ul>\n<li>Manage risks and issues to ensure effective delivery.</li>\n</ul>\n<p>Expertly utilises escalation routes and governance to gain traction and deliver rapid solutions</p>\n<p>Share lessons learned and best practice recommendations with relevant stakeholders to drive continuous improvement</p>\n<p>Build relationships and achieve results without line management input</p>\n<p>Curious and self-motivated</p>\n<p>Desirable Skills/Experience:</p>\n<ul>\n<li>Experience utilising standard process improvement methodologies (e.g. 
Lean Six Sigma) to identify root causes of process issues and identify areas of process improvement</li>\n</ul>\n<ul>\n<li>Some experience with Quality Systems and Quality Management, including process definition and process improvement, ideally within an Information Systems environment</li>\n</ul>\n<ul>\n<li>Experience in multiple fields of clinical development</li>\n</ul>\n<ul>\n<li>Understanding of ICH GCP guidelines in relation to study delivery</li>\n</ul>\n<ul>\n<li>Experience working in a global organisation with complex/geographical context</li>\n</ul>\n<p>When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines.</p>\n<p>In-person working gives us the platform we need to connect, work at pace and challenge perceptions.</p>\n<p>That&#39;s why we work, on average, a minimum of three days per week from the office.</p>\n<p>But that doesn&#39;t mean we&#39;re not flexible.</p>\n<p>We balance the expectation of being in the office while respecting individual flexibility.</p>\n<p>Join us in our unique and ambitious world.</p>\n<p>AstraZeneca offers the chance to follow the science end-to-end, from early discovery through late-stage development, in an environment where digital, data science and AI are embedded into everyday work.</p>\n<p>Colleagues collaborate across disciplines and geographies to tackle complex diseases, learn from patients&#39; experiences and translate ideas into life-changing medicines for people worldwide.</p>\n<p>Continuous learning is encouraged through diverse projects, development programmes and exposure to different therapy areas, enabling meaningful careers built on curiosity, courage and scientific excellence.</p>\n<p>If this role matches your skills and ambition, apply now to help shape the future of clinical development and make a real impact for patients!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b9d8b874-95f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GCS Services","sameAs":"https://astrazeneca.eightfold.ai","logo":"https://logos.yubhub.co/astrazeneca.eightfold.ai.png"},"x-apply-url":"https://astrazeneca.eightfold.ai/careers/job/563877689867695","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["BS, MS, or PhD in a biological or healthcare-related field","2+ years of relevant pharmaceutical or clinical development industry experience","Ability to work collaboratively","Proven organisational and analytical skills","Good project management skills","Excellent knowledge of spoken and written English","Strong business communication, stakeholder management and presentation skills","Well-developed organisational and interpersonal skills","Ensure risks and issues management to ensure effective delivery","Expertly utilises escalation routes and governance to gain traction and deliver rapid solutions","Share lessons learned and best practice recommendations with relevant stakeholders to drive continuous improvement","Build relationships and achieve results without line management input","Curious and self-motivated"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:11:59.168Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Durham, North Carolina, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Healthcare","industry":"Pharmaceuticals","skills":"BS, MS, or PhD in a biological or healthcare-related field, 2+ years of relevant pharmaceutical or clinical development industry experience, Ability to work collaboratively, Proven organisational and analytical skills, Good project management skills, Excellent knowledge of spoken and written English, Strong business communication, stakeholder management and presentation skills, 
Well-developed organisational and interpersonal skills, Ensure risks and issues management to ensure effective delivery, Expertly utilises escalation routes and governance to gain traction and deliver rapid solutions, Share lessons learned and best practice recommendations with relevant stakeholders to drive continuous improvement, Build relationships and achieve results without line management input, Curious and self-motivated"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_52261e57-a37"},"title":"Senior Software Engineer - Revenue Management (all genders)","description":"<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>\n<p>You&#39;ll work with modern tooling, a cross-functional team, and teammates who care deeply about impact, collaboration, and learning together.</p>\n<p>As a Senior Software Engineer - Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. 
You bridge the gap between data science models and reliable, scalable production systems.</p>\n<p>Your key responsibilities will include:</p>\n<ul>\n<li>Supporting model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>\n</ul>\n<ul>\n<li>Building and operating production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>\n</ul>\n<ul>\n<li>Collaborating cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>\n</ul>\n<ul>\n<li>Owning infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>\n</ul>\n<ul>\n<li>Ensuring operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>\n</ul>\n<ul>\n<li>Migrating and productionizing POCs: turn experimental code into robust, maintainable Python applications.</li>\n</ul>\n<ul>\n<li>Ensuring data quality, consistency, and documentation across revenue management metrics and datasets.</li>\n</ul>\n<p>You don&#39;t need to meet every requirement: we&#39;re looking for strong fundamentals, ownership, and the motivation to grow.</p>\n<ul>\n<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>\n</ul>\n<ul>\n<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>\n</ul>\n<ul>\n<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>\n</ul>\n<ul>\n<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>\n</ul>\n<ul>\n<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>\n</ul>\n<ul>\n<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the 
entire team&#39;s productivity.</li>\n</ul>\n<ul>\n<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>\n</ul>\n<p>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</p>\n<p>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</p>\n<p>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</p>\n<p>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</p>\n<p>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. 
You’ll stay connected through regular events and meet-ups across our almost 30 offices.</p>\n<p>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</p>","url":"https://yubhub.co/jobs/job_52261e57-a37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2597551","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Python","CI/CD","Docker","Infrastructure-as-code","Cloud platforms","ML model deployment"],"x-skills-preferred":["LLM tools and agents","Data science models","Reliable and scalable production systems"],"datePosted":"2026-04-18T22:10:23.434Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, CI/CD, Docker, Infrastructure-as-code, Cloud platforms, ML model deployment, LLM tools and agents, Data science models, Reliable and scalable production systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fdbf1190-daf"},"title":"Consultant Business Systems","description":"<p>We are seeking a Consultant Business Systems to join our team in Chicago, IL. 
As a Consultant Business Systems, you will be responsible for IT Service Ownership for SWIFT Alliance Gateway USA, collaborating with key stakeholders to define the technology strategy that aligns with the business goals of the bank, and managing a small team of cross-functional engineers located across Buffalo and India.</p>\n<p>Your responsibilities will include driving down costs through automation, reduced TOIL, and efficient use of infrastructure; evaluating the SWIFT product and services from a functional and non-functional perspective; and delivering the change program as per the annual Book of Work. You will also be responsible for collecting resource and cost estimates, establishing baseline budgets, and securing approvals from the Program Steering Committee; tracking expenses against approved budgets and providing monthly progress reports to stakeholders; and conducting revenue projections to support business cases, identifying cost savings, and optimizing resource allocation.</p>\n<p>Additionally, you will be responsible for the budget management and reporting for SWIFT Alliance Gateway USA, identifying and managing risks, dependencies, and compliance requirements throughout the service lifecycle, ensuring all functional and regulatory standards are met. You will perform periodic risk assessments, enforce HSBC&#39;s Enterprise and Operational Risk Management Frameworks, and maintain robust control documentation.</p>\n<p>You will participate in audits and implement all security and data privacy controls as per compliance standards, leading program governance forums, project kickoff meetings, and project status updates, and facilitating cross-team collaboration. 
You will define and execute a detailed communication plan, including escalation protocols and risk communication, maintaining strong working relationships with cross-functional teams, vendors, and resource managers, ensuring alignment with project goals.</p>\n<p>This is a full-time position, Monday-Friday, 40 hours per week, with telecommuting permitted up to 100% from anywhere in the US.</p>","url":"https://yubhub.co/jobs/job_fdbf1190-daf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC Technology & Services (USA) Inc.","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610161979","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$153,317.00 to $163,317.00 per year","x-skills-required":["Cross border high value payment processing systems and methodologies","Configuring and managing SWIFT Products, including SWIFT Alliance Gateway (SAG), SWIFT Net Link (SNL), and Hardware Security Module (HSM)","Java, IBM MQ, and WebSphere Application Server (WAS)","Production support for SWIFT products","IT Service Ownership for Mission Critical Payment Systems, managing the safety, security, resilience, and availability of IT services","Backlog prioritization, escalation processes, and risk management for US SWIFT Applications","Vendor contract management for critical financial market utilities, specifically with SWIFT, including risk management, relationship oversight, and ongoing contract monitoring","Financial management for SWIFT projects, including providing resource estimates, coordinating with global teams for resource allocation, reviewing and approving timesheets, and analyzing monthly finance reports to manage budget consumption","Project 
management for SWIFT projects, including creating project plans and milestones, generating weekly status reports, tracking project RAG status, and developing remediation plans to address issues and keep projects on track","Jenkins and GitHub, including automating deployment processes for SWIFT services, managing software upgrades, and ensuring proper version control of software artifacts in GitHub","Agile methodologies and running projects in sprints using JIRA and Confluence"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:09:05.123Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chicago, IL"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Finance","skills":"Cross border high value payment processing systems and methodologies, Configuring and managing SWIFT Products, including SWIFT Alliance Gateway (SAG), SWIFT Net Link (SNL), and Hardware Security Module (HSM), Java, IBM MQ, and WebSphere Application Server (WAS), Production support for SWIFT products, IT Service Ownership for Mission Critical Payment Systems, managing the safety, security, resilience, and availability of IT services, Backlog prioritization, escalation processes, and risk management for US SWIFT Applications, Vendor contract management for critical financial market utilities, specifically with SWIFT, including risk management, relationship oversight, and ongoing contract monitoring, Financial management for SWIFT projects, including providing resource estimates, coordinating with global teams for resource allocation, reviewing and approving timesheets, and analyzing monthly finance reports to manage budget consumption, Project management for SWIFT projects, including creating project plans and milestones, generating weekly status reports, tracking project RAG status, and developing remediation plans to address issues and keep projects on track, Jenkins and GitHub, including automating deployment 
processes for SWIFT services, managing software upgrades, and ensuring proper version control of software artifacts in GitHub, Agile methodologies and running projects in sprints using JIRA and Confluence","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":153317,"maxValue":163317,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_22fad646-4f4"},"title":"Client Solution Architect","description":"<p>In compliance with applicable laws, HSBC is committed to employing only those who are authorised to work in the US. Our purpose is to open up a world of opportunity by using our unique expertise, capabilities, breadth and perspectives to open up new kinds of opportunity for our customers.</p>\n<p>The Client Solution Architect role within the Client Solutions capability in Global Client Connectivity is critical in bridging the gap between sales and delivery, ensuring clients receive tailored, technically robust global payment solutions, with seamless implementation of complex or newly commercialised product capabilities.</p>\n<p>As a Client Solution Architect, you will be responsible for delivering technical and product solutions to top-tier clients across Global Payments Solutions (GPS) connectivity products. You will lead multiple global, complex projects to onboard clients to Client Connectivity channels, including HSBCnet, APIs, SWIFT connectivity, and Host-to-Host (H2H) / file-based connectivity.</p>\n<p>You will also lead technical client workshops to understand current-state processes and support automation goals across Accounts Payable, Receivables, Treasury, Payroll, and bank statements/reconciliation integration. 
You will provide pre-mandate technical support for Request for Information / Request for Proposals (RFI/RFPs) and solution proposals.</p>\n<p>You will deliver integrated solutions that streamline workflows and optimise client Enterprise Resource Planning / Treasury Management System (ERP/TMS) usage, proactively identify technical blockers, and drive resolution to protect timelines and client outcomes.</p>\n<p>You will finalise product and technical solutions post-mandate, documenting the approach in a statement of work; partner with project management to build plans, identify risks, and manage dependencies.</p>\n<p>You will communicate effectively with stakeholders at country, regional and global levels; act as liaison across Client Service, Sales/Coverage, Product and Information Technology (IT).</p>\n<p>You will manage a varied portfolio of complex, global integration initiatives, including piloting new GPS products/services.</p>\n<p>You will simplify and challenge existing processes, contribute to continuous improvement and client experience outcomes, and provide feedback to product teams on client requirements and competitive landscape to drive enhancements.</p>\n<p>You will maintain an in-depth understanding of the financial and technical environment of clients&#39; businesses and stay current on industry trends and payment regulation changes.</p>","url":"https://yubhub.co/jobs/job_22fad646-4f4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610513366","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Deep experience in client-facing solutioning, 
implementation, integration, or technical consulting within banking/payments, treasury, or financial technology","Strong client-facing experience with the ability to translate client needs into practical, scalable connectivity solutions","Excellent communication and collaboration skills, with the ability to influence and align stakeholders across Sales/Coverage, Product, IT, Client Service and external client teams","Strong understanding of banking and payments, including ACH, wires, and real-time/instant payments","Strong knowledge of ISO 20022 standards and practical experience with payment/reporting message formats"],"x-skills-preferred":["Experience working with ERPs such as SAP, Oracle, NetSuite, Microsoft Dynamics; familiarity with TMS platforms such as Kyriba, GTreasury, FIS Quantum, Integrity, SWIFT and industry changes (including MT to MX migration impacts)","Proven ability to manage multiple complex workstreams, prioritise effectively, and operate as an escalation point/role model","Additional languages are an asset; experience with treasury centralisation/standardisation and change management initiatives is a plus"],"datePosted":"2026-04-18T22:08:53.195Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Buffalo, New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Deep experience in client-facing solutioning, implementation, integration, or technical consulting within banking/payments, treasury, or financial technology, Strong client-facing experience with the ability to translate client needs into practical, scalable connectivity solutions, Excellent communication and collaboration skills, with the ability to influence and align stakeholders across Sales/Coverage, Product, IT, Client Service and external client teams, Strong understanding of banking and payments, including ACH, wires, and real-time/instant payments, Strong knowledge of ISO 20022 standards and 
practical experience with payment/reporting message formats, Experience working with ERPs such as SAP, Oracle, NetSuite, Microsoft Dynamics; familiarity with TMS platforms such as Kyriba, GTreasury, FIS Quantum, Integrity, SWIFT and industry changes (including MT to MX migration impacts), Proven ability to manage multiple complex workstreams, prioritise effectively, and operate as an escalation point/role model, Additional languages are an asset; experience with treasury centralisation/standardisation and change management initiatives is a plus"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9bcc033f-15c"},"title":"GenAI Strategic Projects Lead, Public Sector","description":"<p>We&#39;re seeking a GenAI Strategic Projects Lead to own high-impact projects that drive revenue and experimentation. In this role, you&#39;ll work across operations, engineering, and customer engagement to produce world-class training and test and evaluation data for Large Language Models for our Public Sector customers.</p>\n<p>This role offers a rare opportunity to make a meaningful impact at the intersection of AI and national security. 
You will help build Generative AI data-labeling pipelines from the ground up, create operational processes to manage and optimize an in-house expert data workforce, and develop novel technology-driven approaches (e.g., scripts, prompt engineering, hybrid data) to improve the quality of our training and evaluation datasets.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Develop, build, and maintain the infrastructure required to ensure data pipelines are efficient, scalable, and produce high-quality outputs</li>\n<li>Take ownership of day-to-day progress on high-priority data production pipelines, ensuring projects move forward efficiently</li>\n<li>Partner with subject matter experts in their fields to validate the quality of our data and to translate deep domain knowledge into scalable processes and measurable outcomes</li>\n<li>Work closely with customers to understand their requirements and design data taxonomies that optimize model performance</li>\n<li>Utilize analytics and data visualization tools to track progress, identify bottlenecks, and make data-driven decisions to optimize pipeline performance</li>\n<li>Influence cross-org collaboration to define and advance human data strategy, influencing technical and non-technical stakeholders to ensure data quality, scalability, and long-term platform leverage</li>\n<li>Own larger and larger components of our data delivery processes, until you ultimately serve as the full owner of our most visible and high impact customer pipelines</li>\n</ul>\n<p>You have:</p>\n<ul>\n<li>5+ years of experience in product development, data science, or operations</li>\n<li>A history of successful project management and comfort in ambiguity</li>\n<li>Ability to analyze complex operational data, build queries, and identify trends to inform decisions and optimize processes</li>\n<li>Technical aptitude to understand how to produce data for state of the art post-training techniques such as supervised fine tuning (SFT), reinforcement learning 
from human feedback (RLHF), Reinforcement Learning with Verifiable Rewards (RLVR), etc.</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience working in defense tech and/or an AI company</li>\n<li>A technical degree in fields like computer science, data science, or engineering</li>\n<li>A deep understanding of ML operations for generative AI workflows / products</li>\n<li>An active Top Secret security clearance</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. 
Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p>The base salary range for this full-time position in the location of Washington DC is: $169,600-$212,000 USD</p>","url":"https://yubhub.co/jobs/job_9bcc033f-15c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4648363005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$169,600-$212,000 USD","x-skills-required":["product development","data science","operations","project management","complex operational data analysis","data visualization tools","cross-org collaboration","human data strategy","data quality","scalability","long-term platform leverage"],"x-skills-preferred":["defense tech","AI company","computer science","engineering","ML operations","generative AI workflows","Top Secret security clearance"],"datePosted":"2026-04-18T16:01:47.128Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"product development, data science, operations, project management, complex operational data analysis, data visualization tools, cross-org collaboration, human data strategy, data quality, scalability, long-term platform leverage, defense tech, AI company, computer science, engineering, ML operations, generative AI workflows, Top Secret security 
clearance","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":169600,"maxValue":212000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_978310df-422"},"title":"Staff FullStack Software Engineer, (Forward Deployed), GPS","description":"<p>We&#39;re seeking a Full Stack Software Engineer to join our International Public Sector team. As a Full Stack Software Engineer, you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack, AI applications, to solve their most pressing challenges and achieve meaningful impact for citizens.</p>\n<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>\n<p>You will serve as the lead technical strategist for public sector engagements, converting ambiguous mission requirements into robust architectural roadmaps and guiding onsite implementation.</p>\n<p>Architect the fundamental frameworks for production-grade AI applications, setting the gold standard for how interactive UIs, backend systems, and AI models are integrated at scale to deliver reliable outcomes.</p>\n<p>Guide the evolution of cloud infrastructure, ensuring security, global scalability, and long-term system integrity across all environments.</p>\n<p>Direct the development of core platforms and shared services, ensuring they solve cross-cutting needs for diverse global client use cases.</p>\n<p>Partner with cross-functional leadership to steer the technical roadmap, mentoring senior and junior staff and ensuring all products align with a cohesive, future-proof technical architecture.</p>\n<p>Bridge the gap between the field and the core platform by turning real-world client lessons into the reusable patterns that power the entire 
engineering team.</p>\n<p>Ideally, you&#39;d have a Master&#39;s or PhD in Computer Science or equivalent deep industry experience in architecting complex, distributed systems.</p>\n<p>10+ years of full-stack expertise across Python, Node.js, and React, with a proven track record of designing high-scale architectures on Kubernetes and global cloud infrastructures (AWS/Azure/GCP).</p>\n<p>Expert ability to design and oversee production-grade ecosystems, ensuring world-class standards for system integrity, security, and long-term scalability.</p>\n<p>Extensive experience deploying and troubleshooting sophisticated end-to-end solutions directly within complex, high-security client environments.</p>\n<p>A self-driven leader capable of resolving extreme ambiguity, mentoring senior staff, and setting the technical vision for the organization.</p>\n<p>A driver of asynchronous workflows and documentation-first cultures to streamline global engineering velocity and reduce friction.</p>\n<p>Proficient in Arabic.</p>\n<p>Nice to haves include past experience working at a startup as a CTO or founding engineer or in a forward deployed engineer / dedicated customer engineer role, experience working cross functionally with operations, and a proven track record of building LLM-driven solutions with the strategic foresight to anticipate landscape shifts and architect future-proof systems.</p>","url":"https://yubhub.co/jobs/job_978310df-422","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4673314005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Node.js","React","Kubernetes","Cloud 
infrastructure","AI","LLMs","Cloud computing","Security","Scalability","Distributed systems"],"x-skills-preferred":["Arabic","Startup experience","CTO experience","Founding engineer experience","Forward deployed engineer experience","Customer engineer experience","Operations experience","LLM-driven solutions"],"datePosted":"2026-04-18T16:01:27.211Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Doha, Qatar"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Node.js, React, Kubernetes, Cloud infrastructure, AI, LLMs, Cloud computing, Security, Scalability, Distributed systems, Arabic, Startup experience, CTO experience, Founding engineer experience, Forward deployed engineer experience, Customer engineer experience, Operations experience, LLM-driven solutions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7a3c1d3f-0e2"},"title":"Head of IT SOX","description":"<p>We are seeking a Head of IT SOX to join our Internal Audit SOX team at Anthropic. 
As the Head of IT SOX, you will lead the organisation&#39;s IT SOX compliance program, with a primary focus on IT General Controls (ITGCs), application controls, and system/process risk assessments.</p>\n<p>In this role, you will work cross-functionally with Engineering, Security, IT, DevOps, and Finance to ensure the organisation meets SOX 404 compliance requirements in a rapidly scaling, technology-driven environment.</p>\n<p>This is a unique opportunity to build IT SOX controls at an AI-first company, leveraging cutting-edge AI technology to create innovative, automated, and scalable compliance solutions.</p>\n<p>You will help define how AI can transform traditional SOX processes, from continuous monitoring to intelligent risk assessment, while maintaining the rigor required for public company compliance.</p>\n<p>As the Head of IT SOX, you will own SOX IT planning, scoping, testing, remediation, and reporting activities. You&#39;ll work directly with technical partners to design and implement scalable controls, oversee documentation, and manage communication with external auditors.</p>\n<p>This role reports to the Head of Internal Audit and plays a critical part in strengthening internal control maturity as the company scales through pre-IPO readiness and longer term as a public company.</p>\n<p>Responsibilities:</p>\n<p>SOX IT Program Leadership</p>\n<ul>\n<li>Lead and manage the organisation&#39;s end-to-end IT SOX compliance program</li>\n</ul>\n<ul>\n<li>Own SOX IT planning, scoping, testing, remediation, and reporting activities</li>\n</ul>\n<ul>\n<li>Build scalable, automated, and sustainable controls to support growth through pre-IPO and post-IPO readiness</li>\n</ul>\n<ul>\n<li>Develop and maintain the SOX IT compliance roadmap aligned with organisational growth</li>\n</ul>\n<ul>\n<li>Pioneer the use of AI and automation technologies to enhance control effectiveness, continuous monitoring, and risk detection</li>\n</ul>\n<ul>\n<li>Drive IT controls 
rationalisation initiatives to optimise the control environment and increase reliance on IT automated controls (ITACs)</li>\n</ul>\n<p>ITGC and Application Controls</p>\n<ul>\n<li>Design, implement, and monitor IT General Controls (ITGCs) across critical systems</li>\n</ul>\n<ul>\n<li>Evaluate and test application controls and IT automated controls (ITACs) to ensure proper functionality and compliance</li>\n</ul>\n<ul>\n<li>Conduct system and process risk assessments to identify control gaps and remediation needs</li>\n</ul>\n<ul>\n<li>Oversee control documentation and ensure audit-ready evidence is maintained</li>\n</ul>\n<ul>\n<li>Assess and monitor Systems Development Life Cycle (SDLC) controls for new system implementations and changes</li>\n</ul>\n<p>Cross-Functional Partnership</p>\n<ul>\n<li>Partner with Engineering, Security, IT, DevOps, and Finance teams to implement scalable controls</li>\n</ul>\n<ul>\n<li>Work directly with technical partners to design controls that align with business operations</li>\n</ul>\n<ul>\n<li>Collaborate with process owners to identify control improvements and automation opportunities</li>\n</ul>\n<ul>\n<li>Support SEC cybersecurity disclosure requirements and ongoing monitoring of cyber risks</li>\n</ul>\n<p>External Audit Management</p>\n<ul>\n<li>Serve as the primary point of contact for external auditors on IT SOX matters</li>\n</ul>\n<ul>\n<li>Manage audit requests, coordinate testing schedules, and facilitate audit walkthroughs</li>\n</ul>\n<ul>\n<li>Track and report on IT SOX compliance status to leadership, the Board, and Audit Committee</li>\n</ul>\n<p>If you have 10+ years of hands-on IT audit and SOX compliance experience, preferably in both Big 4 and in-house internal audit/SOX leadership roles at a fast-paced technology company, you may be a good fit for this role.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7a3c1d3f-0e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5061691008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$360,000 USD","x-skills-required":["IT General Controls (ITGCs)","application controls","system/process risk assessments","SOX 404 compliance","AI technology","automated and scalable compliance solutions","continuous monitoring","intelligent risk assessment","public company compliance","SOX IT planning","scoping","testing","remediation","reporting activities","scalable controls","documentation","communication with external auditors","internal control maturity","pre-IPO readiness","post-IPO readiness","IT controls rationalisation","IT automated controls (ITACs)","Systems Development Life Cycle (SDLC) controls","cybersecurity disclosure requirements","cyber risks"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:01:05.719Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"IT General Controls (ITGCs), application controls, system/process risk assessments, SOX 404 compliance, AI technology, automated and scalable compliance solutions, continuous monitoring, intelligent risk assessment, public company compliance, SOX IT planning, scoping, testing, remediation, reporting activities, scalable controls, documentation, communication with external auditors, internal control maturity, pre-IPO readiness, post-IPO readiness, IT controls rationalisation, IT automated controls (ITACs), Systems Development Life Cycle (SDLC) controls, cybersecurity disclosure requirements, cyber 
risks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":360000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4aa672e2-c8c"},"title":"Marketing Events Content Manager","description":"<p>As a Marketing Events Content Manager at Anthropic, you will own the content development and execution for our 1P events and experiences. This role requires 8+ years of experience in content marketing, event content development, or a related field, ideally within technology or B2B environments. You will develop compelling narratives, presentations, speaker content, and supporting materials that bring our events to life and ensure every touchpoint authentically communicates Anthropic&#39;s mission and Claude&#39;s capabilities.</p>\n<p>In this role, you&#39;ll be the connective tissue between our event strategy and the content that makes each experience resonate. 
You&#39;ll develop everything from keynote narratives and session abstracts to speaker preparation materials and post-event content, ensuring consistency and quality across Anthropic-owned events like Code with Claude, Anthropic Futures Forum, and our industry-specific programs.</p>\n<p>This is an ideal opportunity for someone who thrives at the intersection of storytelling, event marketing, and program management, and who can translate complex AI concepts into accessible, engaging content for diverse audiences.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own the end-to-end content strategy and development for Anthropic&#39;s core marketing events, ensuring alignment with event objectives, brand standards, and company OKRs</li>\n</ul>\n<ul>\n<li>Develop compelling keynote narratives, session descriptions, speaker talking points, and presentation content that showcase Anthropic&#39;s products, research, and customer impact</li>\n</ul>\n<ul>\n<li>Create and manage speaker preparation materials, including briefing documents, rehearsal guides, and &#39;Know Before You Go&#39; content for internal and external speakers</li>\n</ul>\n<ul>\n<li>Write and produce event marketing content across channels, including email campaigns, landing pages, social copy, and promotional materials, in partnership with broader marketing teams</li>\n</ul>\n<ul>\n<li>Build and maintain event content templates, toolkits, and best practices that can scale across a growing global events calendar</li>\n</ul>\n<ul>\n<li>Collaborate cross-functionally with product marketing, communications, developer relations, and sales teams to source stories, technical content, and customer narratives for event programming</li>\n</ul>\n<ul>\n<li>Manage content timelines and deliverables across multiple concurrent events, ensuring all materials meet quality standards and deadlines</li>\n</ul>\n<ul>\n<li>Develop post-event content including recap materials, highlight packages, and follow-up communications that extend 
the impact of each event</li>\n</ul>\n<ul>\n<li>Own the development of event programming and agendas, leading topic ideation, session sequencing, and content-mix decisions that create cohesive, engaging event experiences, balancing technical depth, audience diversity, and narrative arc for audiences including developers, enterprise leaders, startups, and partners.</li>\n</ul>\n<ul>\n<li>Identify, pitch, and secure external speakers for Anthropic events, including journalists, industry thought leaders, subject matter experts, and academics, managing the full outreach process from prospecting through confirmation and contracting</li>\n</ul>\n<ul>\n<li>Build and maintain long-term relationships with a diverse external speaker pipeline, positioning Anthropic events as a premier destination for top voices in AI and establishing ongoing partnerships that drive recurring speaker engagement across our events calendar</li>\n</ul>\n<ul>\n<li>Track content performance metrics and audience engagement to continuously refine event content strategy</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 8+ years of experience in content marketing, event content development, or a related field, ideally within technology or B2B environments</li>\n</ul>\n<ul>\n<li>Have a demonstrated ability to develop compelling narratives and presentations for live events, conferences, or executive communications</li>\n</ul>\n<ul>\n<li>Are an exceptional writer who can distill complex technical concepts into clear, engaging content for varied audiences, from developers to C-suite executives</li>\n</ul>\n<ul>\n<li>Have experience managing speaker preparation and content development workflows for multi-session events or conferences</li>\n</ul>\n<ul>\n<li>Have experience sourcing and securing external speakers or contributors, including crafting compelling outreach and navigating relationships with journalists, thought leaders, subject matter experts, or 
academics.</li>\n</ul>\n<ul>\n<li>Bring strong relationship-building skills; you&#39;re comfortable being a face of Anthropic&#39;s event program and cultivating long-term, mutually valuable partnerships with external collaborators.</li>\n</ul>\n<ul>\n<li>Are highly organized with the ability to manage multiple content workstreams simultaneously while maintaining high quality standards</li>\n</ul>\n<ul>\n<li>Have strong collaboration skills and experience working cross-functionally with product, engineering, sales, and creative teams to produce content</li>\n</ul>\n<ul>\n<li>Are comfortable working in a fast-paced, high-growth environment where priorities can shift quickly and scrappiness is valued</li>\n</ul>\n<ul>\n<li>Have a genuine interest in AI technology and are excited to learn about Anthropic&#39;s products and research to inform authentic event storytelling</li>\n</ul>\n<ul>\n<li>Are results-oriented with a bias toward action: you can develop a content plan and execute it, iterating quickly based on feedback</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience in event content for technology companies, particularly in AI, cloud, or enterprise software</li>\n</ul>\n<ul>\n<li>Background in producing content for both B2B and B2C audiences across event formats (keynotes, workshops, demos, networking experiences)</li>\n</ul>\n<ul>\n<li>An established professional network spanning journalists, thought leaders, academics, or subject matter experts in AI or adjacent fields</li>\n</ul>\n<ul>\n<li>Experience building and managing a recurring speaker pipeline for a growing events program, including strategies for speaker retention and long-term re-engagement.</li>\n</ul>\n<ul>\n<li>Familiarity with event marketing tools and platforms for content delivery and attendee engagement</li>\n</ul>\n<ul>\n<li>Experience developing content strategies that demonstrably contributed to pipeline generation or brand awareness 
goals</li>\n</ul>\n<ul>\n<li>Comfort working with technical subject matter experts and translating their insights into polished event content</li>\n</ul>\n<ul>\n<li>Experience building scalable content frameworks or toolkits for growing event programs</li>\n</ul>\n<ul>\n<li>A portfolio that demonstrates range across event content types, from executive-level presentations to hands-on workshop materials</li>\n</ul>\n<ul>\n<li>Deadline to apply: None. Applications will be reviewed on a rolling basis.</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (&#39;OTE&#39;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $200,000-$255,000 USD</p>","url":"https://yubhub.co/jobs/job_4aa672e2-c8c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://anthropic.ai/","logo":"https://logos.yubhub.co/anthropic.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5100613008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$200,000-$255,000 USD","x-skills-required":["Content marketing","Event content development","Storytelling","Program management","Event strategy","Brand standards","OKRs","Keynote narratives","Session descriptions","Speaker talking points","Presentation content","Speaker preparation materials","Briefing documents","Rehearsal guides","Know Before You Go content","Event marketing content","Email campaigns","Landing pages","Social copy","Promotional materials","Event content templates","Toolkits","Best practices","Cross-functional collaboration","Product marketing","Communications","Developer relations","Sales 
teams","Content timelines","Deliverables","Quality standards","Deadlines","Post-event content","Recap materials","Highlight packages","Follow-up communications","Event programming","Agendas","Topic ideation","Session sequencing","Content-mix decisions","Technical depth","Audience diversity","Narrative arc","External speakers","Journalists","Industry thought leaders","Subject matter experts","Academics","Long-term relationships","Speaker pipeline","Top voices in AI","Ongoing partnerships","Recurring speaker engagement","Content performance metrics","Audience engagement","Refine event content strategy"],"x-skills-preferred":["Experience in event content for technology companies","Background in producing content for both B2B and B2C audiences","Established professional network spanning journalists, thought leaders, academics, or subject matter experts in AI or adjacent fields","Familiarity with event marketing tools and platforms for content delivery and attendee engagement","Comfort working with technical subject matter experts and translating their insights into polished event content","Experience building scalable content frameworks or toolkits for growing event programs","A portfolio that demonstrates range across event content types"],"datePosted":"2026-04-18T16:00:13.433Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Technology","skills":"Content marketing, Event content development, Storytelling, Program management, Event strategy, Brand standards, OKRs, Keynote narratives, Session descriptions, Speaker talking points, Presentation content, Speaker preparation materials, Briefing documents, Rehearsal guides, Know Before You Go content, Event marketing content, Email campaigns, Landing pages, Social copy, Promotional materials, Event content templates, Toolkits, Best practices, Cross-functional 
collaboration, Product marketing, Communications, Developer relations, Sales teams, Content timelines, Deliverables, Quality standards, Deadlines, Post-event content, Recap materials, Highlight packages, Follow-up communications, Event programming, Agendas, Topic ideation, Session sequencing, Content-mix decisions, Technical depth, Audience diversity, Narrative arc, External speakers, Journalists, Industry thought leaders, Subject matter experts, Academics, Long-term relationships, Speaker pipeline, Top voices in AI, Ongoing partnerships, Recurring speaker engagement, Content performance metrics, Audience engagement, Refine event content strategy, Experience in event content for technology companies, Background in producing content for both B2B and B2C audiences, Established professional network spanning journalists, thought leaders, academics, or subject matter experts in AI or adjacent fields, Familiarity with event marketing tools and platforms for content delivery and attendee engagement, Comfort working with technical subject matter experts and translating their insights into polished event content, Experience building scalable content frameworks or toolkits for growing event programs, A portfolio that demonstrates range across event content types","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200000,"maxValue":255000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_853e1417-019"},"title":"Solutions Architect, Applied AI (National Security)","description":"<p>As a Solutions Architect, Applied AI (National Security), you will be a Pre-Sales architect focused on becoming a trusted technical advisor helping national security and defense agencies understand the value of Claude and paint the vision on how they can successfully integrate and deploy Claude into their technology stack.</p>\n<p>You will combine your deep 
technical expertise with customer-facing skills to architect innovative LLM solutions that address complex mission challenges while maintaining our high standards for safety and reliability.</p>\n<p>Working closely with our Sales, Product, and Engineering teams, you&#39;ll guide customers from initial technical discovery through successful deployment. You&#39;ll leverage your expertise to help customers understand Claude&#39;s capabilities, develop evals, and design scalable architectures that maximize the value of our AI systems.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with account executives to deeply understand customer requirements and translate them into technical solutions, ensuring alignment between business objectives and technical implementation</li>\n</ul>\n<ul>\n<li>Serve as the primary technical advisor to enterprise customers throughout their Claude adoption journey, from discovery to initial evaluation through deployment. You will need to coordinate internally across multiple teams &amp; stakeholders to drive customer success</li>\n</ul>\n<ul>\n<li>Support customers building with Claude Code, the Claude API, and Claude for Enterprise</li>\n</ul>\n<ul>\n<li>Create and deliver compelling technical content tailored to different audiences. 
You will need to be able to run the gamut from technical deep dives for engineering &amp; development teams up to business-value-focused conversations with executives</li>\n</ul>\n<ul>\n<li>Guide technical architecture decisions and help customers integrate Claude effectively into their existing technology stack</li>\n</ul>\n<ul>\n<li>Help customers develop evaluation frameworks to measure Claude&#39;s performance for their specific use cases</li>\n</ul>\n<ul>\n<li>Identify common integration patterns and contribute insights back to our Product and Engineering teams</li>\n</ul>\n<ul>\n<li>Travel frequently to customer sites for workshops, technical deep dives, and relationship building</li>\n</ul>\n<ul>\n<li>Maintain strong knowledge of the latest developments in LLM capabilities and implementation patterns</li>\n</ul>\n<p>You may be a good fit if you have:</p>\n<ul>\n<li>TS/SCI clearance required</li>\n</ul>\n<ul>\n<li>Must have prior experience working with US national security (defense and/or intelligence) agencies</li>\n</ul>\n<ul>\n<li>5+ years of experience in technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager</li>\n</ul>\n<ul>\n<li>Experience navigating complex buying cycles involving multiple stakeholders</li>\n</ul>\n<ul>\n<li>Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering &amp; IT teams, and more</li>\n</ul>\n<ul>\n<li>Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders</li>\n</ul>\n<ul>\n<li>Experience designing scalable cloud architectures and integrating with enterprise systems</li>\n</ul>\n<ul>\n<li>Familiar with Python</li>\n</ul>\n<ul>\n<li>Familiarity with common LLM frameworks and tools or a background in machine learning or data science</li>\n</ul>\n<ul>\n<li>Excitement for engaging in cross-organizational 
collaboration, working through trade-offs, and balancing competing priorities</li>\n</ul>\n<ul>\n<li>A love of teaching, mentoring, and helping others succeed</li>\n</ul>\n<ul>\n<li>Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders. You enjoy engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</li>\n</ul>\n<ul>\n<li>Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems</li>\n</ul>\n<p>The annual compensation range for this role is $240,000-$270,000 USD.</p>\n<p>Logistics:</p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. 
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>\n<p>How we&#39;re different:</p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. 
This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p>Come work with us!</p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to</p>","url":"https://yubhub.co/jobs/job_853e1417-019","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5079511008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$240,000-$270,000 USD","x-skills-required":["TS/SCI clearance","Prior experience working with US national security (defense and/or intelligence) agencies","Technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager","Experience navigating complex buying cycles involving multiple stakeholders","Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders","Experience designing scalable cloud architectures and integrating with enterprise systems","Familiar with Python","Familiarity with common LLM frameworks and tools or a background in machine learning or data science"],"x-skills-preferred":["Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering & IT teams, and more","Excitement for 
engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities","A love of teaching, mentoring, and helping others succeed","Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders","Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems"],"datePosted":"2026-04-18T15:59:41.597Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TS/SCI clearance, Prior experience working with US national security (defense and/or intelligence) agencies, Technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager, Experience navigating complex buying cycles involving multiple stakeholders, Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders, Experience designing scalable cloud architectures and integrating with enterprise systems, Familiar with Python, Familiarity with common LLM frameworks and tools or a background in machine learning or data science, Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering & IT teams, and more, Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities, A love of teaching, mentoring, and helping others succeed, Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders, Passion for thinking creatively about how to use technology in a way that is 
safe and beneficial, and ultimately furthers the goal of advancing safe AI systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":240000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6cae1ee9-b93"},"title":"Senior Technical Solutions Engineer (Platform)","description":"<p>As a Senior Technical Solutions Engineer, you will provide technical support for Databricks Platform related issues and resolve any challenges involving the Databricks unified analytics platform.</p>\n<p>You will assist customers in their Databricks journey and provide them with the guidance and knowledge that they need to accomplish value and achieve their strategic goals using our products.</p>\n<p>They will look to you for answers to everything from basic technical questions to complex architectural scenarios spanning across the entire Big Data ecosystem.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Troubleshoot and resolve complex customer issues related to Databricks platform</li>\n<li>Provide best practices support for custom-built solutions developed by Databricks customers</li>\n<li>Deliver suggestions for improving performance in customer-specific environments</li>\n<li>Assist with issues around third-party integrations with Databricks environment</li>\n<li>Demonstrate and coordinate with engineering and escalation teams to achieve resolution of customer issues and requests</li>\n<li>Participate in the creation and maintenance of company documentation and knowledge articles</li>\n<li>Be a true proponent of customer advocacy</li>\n<li>Strengthen your AWS/Azure and Databricks platform expertise through learning and internal training programs</li>\n<li>Participate in weekend and weekday on call rotation</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>4+ years experience designing, building, testing, and maintaining 
Python/Java/Scala-based applications</li>\n<li>Expert-level knowledge of Python is desired</li>\n<li>Strong experience with SQL-based databases is required</li>\n<li>Linux/Unix administration skills</li>\n<li>Hands-on experience with AWS, Azure, or GCP</li>\n<li>Experience with a &quot;Distributed Big Data Computing&quot; environment</li>\n<li>Technical degree or the equivalent experience</li>\n<li>Written and spoken proficiency in both Japanese and English</li>\n</ul>","url":"https://yubhub.co/jobs/job_6cae1ee9-b93","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8488552002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","SQL","Linux/Unix","AWS","Azure","GCP","Distributed Big Data Computing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:28.244Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, SQL, Linux/Unix, AWS, Azure, GCP, Distributed Big Data Computing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f7a6445e-87f"},"title":"Head of Partner Success","description":"<p>Job Title: Head of Partner Success</p>\n<p>About the Role:</p>\n<p>We&#39;re hiring the first leader of our Partner Success team, which will pick the best consulting and systems integration firms to partner with and make them great. 
You will build Partner Success at Anthropic from scratch, hiring your first partner success managers, defining how they run their portfolios, and personally carrying a portfolio of partners yourself as your reference implementation.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and lead the Partner Success team from scratch.</li>\n<li>Decide which partners get the team&#39;s attention.</li>\n<li>Design the team&#39;s engagement model.</li>\n<li>Run the joint planning and business review cadence with each managed partner.</li>\n<li>Drive scalable enablement across the managed partner book.</li>\n<li>Drive adoption, retention, and expansion in the joint customer book.</li>\n<li>Drive industry specialization across the managed partner book.</li>\n<li>Steward co-investment funding decisions.</li>\n<li>Interlock with Anthropic&#39;s direct sales field and with the alliances organization that owns the executive relationship with each strategic partner.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Six to ten years of experience working with consulting and systems integration partners at a software company, cloud platform, or partner-led business, with at least two of those years managing a team.</li>\n<li>Built or scaled a partner-facing team from scratch before.</li>\n<li>Deep understanding of how partner success works in a usage-based business.</li>\n<li>Strong commercial instincts on partner selection and co-investment funding.</li>\n<li>Experience running scalable partner or practitioner enablement.</li>\n<li>Enough technical fluency to be credible in an architecture review.</li>\n</ul>\n<p>Annual compensation range for this role is $300,000-$355,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f7a6445e-87f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.co/","logo":"https://logos.yubhub.co/anthropic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5182866008","x-work-arrangement":"onsite","x-experience-level":"mid|senior","x-job-type":"full-time","x-salary-range":"$300,000-$355,000 USD","x-skills-required":["partner success","team management","commercial instincts","technical fluency","scalable enablement"],"x-skills-preferred":["large language models","partner relationship management","co-investment funding"],"datePosted":"2026-04-18T15:59:18.747Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"partner success, team management, commercial instincts, technical fluency, scalable enablement, large language models, partner relationship management, co-investment funding","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":355000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_60a7e1e6-b51"},"title":"Tech Lead/Manager, Machine Learning Research Scientist- LLM Evals","description":"<p>As the leading data and evaluation partner for frontier AI companies, we&#39;re dedicated to advancing the evaluation and benchmarking of large language models (LLMs). 
Our Research teams work with the industry&#39;s leading AI labs to provide high-quality data and accelerate progress in GenAI research.</p>\n<p>We&#39;re seeking a Tech Lead Manager to lead a talented team of research scientists and research engineers focused on developing and implementing novel evaluation methodologies, metrics, and benchmarks to assess the capabilities and limitations of our cutting-edge LLMs.</p>\n<p>Key responsibilities:</p>\n<ul>\n<li>Lead a team of highly effective research scientists and research engineers on LLM evals.</li>\n<li>Conduct research on the effectiveness and limitations of existing LLM evaluation techniques.</li>\n<li>Design and develop novel evaluation benchmarks for large language models, covering areas such as instruction following, factuality, robustness, and fairness.</li>\n<li>Communicate, collaborate, and build relationships with clients and peer teams to facilitate cross-functional projects.</li>\n<li>Collaborate with internal teams and external partners to refine metrics and create standardized evaluation protocols.</li>\n<li>Implement scalable and reproducible evaluation pipelines using modern ML frameworks.</li>\n<li>Publish research findings in top-tier AI conferences and contribute to open-source benchmarking initiatives.</li>\n</ul>\n<p>Ideal candidate has 5+ years of hands-on experience in large language model, NLP, and Transformer modeling, in the setting of both research and engineering development. 
Experience supporting and leading a team of research scientists and research engineers is also required.</p>","url":"https://yubhub.co/jobs/job_60a7e1e6-b51","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4304790005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$264,800-$331,000 USD","x-skills-required":["large language model","NLP","Transformer modeling","research and engineering development","team leadership","cross-functional collaboration","evaluation methodologies","metrics and benchmarks","scalable and reproducible evaluation pipelines","modern ML frameworks"],"x-skills-preferred":["published research in top-tier AI conferences","open-source benchmarking initiatives","customer-facing role"],"datePosted":"2026-04-18T15:59:10.794Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; Seattle, WA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"large language model, NLP, Transformer modeling, research and engineering development, team leadership, cross-functional collaboration, evaluation methodologies, metrics and benchmarks, scalable and reproducible evaluation pipelines, modern ML frameworks, published research in top-tier AI conferences, open-source benchmarking initiatives, customer-facing 
role","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":264800,"maxValue":331000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5d71bfd7-723"},"title":"Partner Solutions Architect, Applied AI","description":"<p>As a Partner Solutions Architect on the Applied AI team at Anthropic, you will be a Pre-Sales architect focused on cultivating technical relationships with our Global and Regional System Integrators (GSIs/RSIs), and our cloud partners (AWS and GCP).</p>\n<p>You will strengthen our relationships with key partners to accelerate indirect revenue, enable their AI practices, and execute on long-term GTM strategy.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Strategic Technical Partnership: Be a technical thought partner to the Anthropic GTM partnerships team, providing technical expertise to better understand the partner landscape, driving key strategic programs, and identifying opportunities to deepen partner technical capabilities. Embed with GSI and cloud partner technical teams to enable their AI practices, support troubleshooting, evangelize Anthropic in their developer communities, and serve as an escalation point for complex technical issues.</li>\n</ul>\n<ul>\n<li>Joint Solution Development: Collaborate with partners to identify high value industry-specific GenAI applications, develop joint solutions and codify reference architectures / best practices to accelerate time to deployment</li>\n</ul>\n<ul>\n<li>Customer Deal Support: Intervene directly to unblock strategic customer deals where partners are the primary delivery vehicle, providing deep technical expertise and solution architecture guidance.</li>\n</ul>\n<ul>\n<li>Partner Ecosystem &amp; Events: Represent Anthropic at partner events such as GSI customer workshops, AWS summits, and industry conferences. 
Lead or support partner-specific developer events, hackathons, and technical enablement sessions, especially for technically native communities.</li>\n</ul>\n<p>Product Feedback: Validate and gather feedback on Anthropic&#39;s products and offerings, especially as they relate to partner use cases and deployment patterns, and deliver this feedback to relevant Anthropic teams to inform product roadmap and partner strategy.</p>\n<p>You may be a good fit if you have:</p>\n<ul>\n<li>5+ years of experience in technical customer-facing/partner-facing roles such as Solutions Architect, Sales Engineer, Partner Sales Engineer, Technical Account Manager</li>\n</ul>\n<ul>\n<li>Track record of successfully partnering with GSIs and/or cloud providers to solve complex technical challenges, from initial solution design through customer delivery</li>\n</ul>\n<ul>\n<li>Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering &amp; IT teams, and more</li>\n</ul>\n<ul>\n<li>Strong presentation &amp; technical communication skills with the ability to translate requirements between technical and business stakeholders</li>\n</ul>\n<ul>\n<li>Experience designing scalable cloud architectures and integrating with enterprise systems</li>\n</ul>\n<ul>\n<li>Familiarity with common LLM frameworks and tools or a background in machine learning or data science</li>\n</ul>\n<ul>\n<li>Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</li>\n</ul>\n<ul>\n<li>A love of teaching, mentoring, and helping others succeed</li>\n</ul>\n<ul>\n<li>Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5d71bfd7-723","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5112486008","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["technical customer-facing/partner-facing roles","Solutions Architect","Sales Engineer","Partner Sales Engineer","Technical Account Manager","cloud providers","scalable cloud architectures","enterprise systems","LLM frameworks","machine learning","data science"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:03.769Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris, France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"technical customer-facing/partner-facing roles, Solutions Architect, Sales Engineer, Partner Sales Engineer, Technical Account Manager, cloud providers, scalable cloud architectures, enterprise systems, LLM frameworks, machine learning, data science"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2717510f-5f6"},"title":"Transaction Principal","description":"<p>As a Transaction Principal for Europe at Anthropic, you&#39;ll drive the commercial sourcing and transaction execution process for our European data center capacity deals. 
You&#39;ll lead RFP processes, negotiate term sheets, and serve as the central leader ensuring seamless stakeholder alignment from initial sourcing through lease execution.</p>\n<p>This role is critical to securing the infrastructure that powers Anthropic&#39;s frontier AI systems across Europe: you&#39;ll bridge commercial negotiations with complex internal coordination across legal, finance, engineering, and network teams, and partner closely with our Compute Markets team, which owns the Europe market strategy and government relationships.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the RFP and commercial sourcing process for European data center deals, managing developer outreach, proposal evaluation, and competitive selection across multiple markets</li>\n<li>Negotiate term sheets and manage the LOI process, structuring commercial terms that meet Anthropic&#39;s technical and business requirements while maintaining strong developer partnerships</li>\n<li>Create the bridge from LOI to executed transaction, ensuring all commercial, technical, and legal requirements are satisfied for deal closure</li>\n<li>Serve as project manager for cross-functional stakeholder engagement, coordinating due diligence teams, internal and external legal counsel, network organization, platform engineers, and finance to ensure alignment prior to lease execution</li>\n<li>Act as the single point of contact for auxiliary organizations including networks, deployments, and government relations, providing regular updates on transaction progress and leasing status</li>\n<li>Develop and maintain transaction timelines, tracking critical-path items and proactively identifying risks that could impact deal closure</li>\n<li>Ensure all stakeholder requirements are captured and addressed in commercial agreements, translating technical and operational needs into contractual terms</li>\n<li>Manage complex digital infrastructure development activities to a construction-ready state, through a 
developer or directly</li>\n<li>Marry the right projects, capital stacks, and developers at the right stages</li>\n<li>Navigate country-specific permitting, grid connection, and regulatory requirements that vary significantly across European markets</li>\n<li>Document and refine transaction processes and playbooks to enable scalable deal execution as Anthropic expands its infrastructure footprint across the region</li>\n<li>Partner with the Compute Markets Manager to prioritize markets, sites, and counterparties, and feed deal learnings back into Europe market strategy</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 10+ years of experience in transaction management, commercial real estate, data center leasing, or infrastructure procurement</li>\n<li>Possess a proven track record of managing complex, multi-stakeholder transactions from sourcing through execution</li>\n<li>Have strong negotiation skills with experience structuring term sheets, LOIs, and commercial agreements</li>\n<li>Excel at project management and can coordinate across legal, technical, finance, and operational teams simultaneously</li>\n<li>Have experience with RFP processes and competitive sourcing for large-scale infrastructure or real estate transactions</li>\n<li>Have experience working in or across European markets, with knowledge of the regional data center and development landscape, including established FLAP-D hubs and emerging markets like the Nordics and Southern Europe</li>\n<li>Are comfortable operating across multiple countries with different legal frameworks, languages, and business cultures</li>\n<li>Are highly organized with strong attention to detail while maintaining focus on strategic deal objectives</li>\n<li>Can operate effectively in fast-paced, ambiguous environments where processes are being built alongside execution</li>\n<li>Demonstrate exceptional communication skills and can coordinate effectively across time zones with US-based HQ teams and 
distributed European partners</li>\n</ul>\n<p>It&#39;s a bonus if you:</p>\n<ul>\n<li>Have experience with data center or hyperscale infrastructure transactions specifically</li>\n<li>Come from the development side of the industry rather than traditional brokerage/leasing: you understand how DC development works and how value is created (yield-on-cost, cap rates, development fees)</li>\n<li>Understand technical requirements for AI/ML workloads including power density, cooling, and network connectivity</li>\n<li>Have worked with legal teams on complex lease negotiations or infrastructure agreements across multiple European jurisdictions</li>\n<li>Understand utility coordination, power procurement, or energy considerations in data center transactions, particularly in the European context (fragmented national power markets, grid connection queues, renewable PPAs, sustainability and efficiency regulations)</li>\n<li>Have familiarity with data sovereignty and regulatory considerations that influence European site selection</li>\n<li>Have relationships within the European data center developer, operator, and broker ecosystem</li>\n<li>Have a background in corporate development, strategic partnerships, or infrastructure investment</li>\n<li>Have experience in high-growth technology companies managing infrastructure expansion</li>\n</ul>\n<p>Annual compensation range for this role is £225,000-£270,000 GBP.</p>","url":"https://yubhub.co/jobs/job_2717510f-5f6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.co/","logo":"https://logos.yubhub.co/anthropic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5170084008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£225,000-£270,000 
GBP","x-skills-required":["transaction management","commercial real estate","data center leasing","infrastructure procurement","RFP processes","competitive sourcing","project management","negotiation skills","term sheets","LOIs","commercial agreements","cross-functional stakeholder engagement","due diligence teams","legal counsel","network organization","platform engineers","finance","auxiliary organizations","networks","deployments","government relations","transaction timelines","critical-path items","risks","technical and operational needs","contractual terms","digital infrastructure development","construction-ready state","projects","capital stacks","developers","country-specific permitting","grid connection","regulatory requirements","transaction processes","playbooks","scalable deal execution","Europe market strategy","Compute Markets Manager","market prioritization","site prioritization","counterparty prioritization","deal learnings"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:03.320Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"transaction management, commercial real estate, data center leasing, infrastructure procurement, RFP processes, competitive sourcing, project management, negotiation skills, term sheets, LOIs, commercial agreements, cross-functional stakeholder engagement, due diligence teams, legal counsel, network organization, platform engineers, finance, auxiliary organizations, networks, deployments, government relations, transaction timelines, critical-path items, risks, technical and operational needs, contractual terms, digital infrastructure development, construction-ready state, projects, capital stacks, developers, country-specific permitting, grid connection, regulatory requirements, transaction processes, playbooks, scalable deal execution, Europe market strategy, Compute 
Markets Manager, market prioritization, site prioritization, counterparty prioritization, deal learnings","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":225000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2445da38-a0f"},"title":"Principal Software Engineer - Contractors Payroll","description":"<p>About Gusto</p>\n<p>At Gusto, we&#39;re on a mission to grow the small business economy. We handle the hard stuff (payroll, health insurance, 401(k)s, and HR) so owners can focus on their craft and their customers.</p>\n<p>With teams in Denver, San Francisco, and New York, we support more than 400,000 small businesses nationwide and are building a workplace that reflects the people we serve. All full-time employees receive competitive base pay, benefits, and equity (RSUs), because everyone who helps build Gusto should share in its success. Offer amounts are determined by role, level, and location. Learn more about our Total Rewards philosophy.</p>\n<p>AI is a fundamental part of how work gets done at Gusto. We expect all team members to actively engage with AI tools relevant to their role and grow their fluency as the technology evolves. AI experience requirements vary by role and will be assessed during the interview process.</p>\n<p>About the Role</p>\n<p>As the Principal Engineer for the Contractors team, you will play a pivotal role in shaping the future of Gusto’s flagship Payroll product, one of the core pillars of our platform. You will design, build, and scale the capabilities that power essential experiences for our customers. 
Working collaboratively with product managers, designers, and other engineers, you will deliver impactful features that meet customer needs and elevate user experiences.</p>\n<p>As a Gusto Engineer at this level, you’ll guide projects end-to-end, shaping initial feature specifications, driving architectural decisions to bring systems closer to their desired end states, executing on complex initiatives, and maintaining code that powers mission-critical functionality. Beyond technical contributions, you’ll help define and contribute to the broader strategy of how Gusto continues to build and scale its Payroll product.</p>\n<p>If you’re excited about solving complex, high-impact problems and want to contribute to a product that touches the lives of millions, we’d love to have you on board!</p>\n<p>About the Team</p>\n<p>Payroll serves as Gusto&#39;s core product, used by each of our 500,000+ customers and contributing significantly to our annual recurring revenue of over $800,000,000. Although we hold the leading market position for SMBs in the US, the market remains highly fragmented, with an estimated 90% still in need of a superior solution. The Contractors team empowers businesses to onboard and pay contractors in 120+ countries with ease and speed. 
This includes critical functionalities such as payroll setup, preparation, and submission, historical reporting, time tracking, and shift scheduling.</p>\n<p>As a key member of this team, you’ll have the opportunity to make a profound impact on both the product and the customers who depend on it daily.</p>\n<p>Here’s what you’ll do day-to-day:</p>\n<ul>\n<li>Architect, build, and maintain scalable, secure, and resilient backend systems to support Gusto’s Payroll products.</li>\n</ul>\n<ul>\n<li>Function as a Technical Lead across multiple teams in Pay Group, helping us keep engineers unblocked and deliver high-quality work supporting our long-term goals.</li>\n</ul>\n<ul>\n<li>Help scale one of the largest Ruby/Rails and TypeScript/React applications in the world.</li>\n</ul>\n<ul>\n<li>Collaborate on complex and ambiguous problems with partnerships from Engineering, Product Management, Design, Data Science, Compliance, Operations, and other cross-functional teams.</li>\n</ul>\n<ul>\n<li>Mentor and grow fellow engineers working to create holistic and scalable solutions.</li>\n</ul>\n<ul>\n<li>Drive the product development process from concept to launch, delivering delightful products that make payroll, taxes, and compliance simple and easy.</li>\n</ul>\n<ul>\n<li>Engage in a highly supportive environment working with others to drive productivity and innovation.</li>\n</ul>\n<p>Here’s what we&#39;re looking for:</p>\n<ul>\n<li>15+ years of professional software development experience.</li>\n</ul>\n<ul>\n<li>Experience as a tech lead, overseeing and successfully delivering projects that span multiple teams.</li>\n</ul>\n<ul>\n<li>Enthusiasm for a collaborative, test-driven development environment.</li>\n</ul>\n<ul>\n<li>Proven experience building and maintaining resilient backend systems that support customer-facing products, including optimizing existing systems for performance, reliability, and scalability.</li>\n</ul>\n<ul>\n<li>Ability to produce maintainable, 
structured, and well-documented code.</li>\n</ul>\n<ul>\n<li>Expertise in developing and maintaining RESTful APIs, GraphQL endpoints, and backend services, ensuring seamless integration with frontend systems and third-party services.</li>\n</ul>\n<ul>\n<li>Demonstrated ability in scaling engineering organizations, with a strong focus on individual and team development and mentorship.</li>\n</ul>\n<ul>\n<li>Experience in highly cross-functional environments working on highly complex products.</li>\n</ul>\n<ul>\n<li>Ability to clearly communicate technical complexity and facilitate informed trade-offs among stakeholders.</li>\n</ul>\n<ul>\n<li>Experience using AI tools to build, test, and iterate on products quickly.</li>\n</ul>\n<ul>\n<li>Understanding of how to evaluate AI-driven outputs using clear success criteria.</li>\n</ul>\n<ul>\n<li>A commitment to staying current on emerging backend technologies and AI frameworks and patterns, regularly experimenting with new approaches.</li>\n</ul>\n<ul>\n<li>Willingness to contribute to shared tools or templates that enhance the speed and safety of AI experimentation.</li>\n</ul>\n<p>Our cash compensation amount for this role is targeted at $251,000-$309,000 /yr for New York. Final offer amounts are determined by multiple factors, including candidate experience and expertise, and may vary from the amounts listed above.</p>\n<p>Gusto has physical office spaces in Denver, San Francisco, and New York City. Employees who are based in those locations will be expected to work from the office on designated days approximately 2-3 days per week (or more depending on role). The same office expectations apply to all Symmetry roles, Gusto&#39;s subsidiary, whose physical office is in Scottsdale. Note: The San Francisco office expectations encompass both the San Francisco and San Jose metro areas. When approved to work from a location other than a Gusto office, a secure, reliable, and consistent internet connection is required. 
This includes non-office days for hybrid employees.</p>\n<p>Our customers come from all walks of life and so do we. We hire great people from a wide variety of backgrounds, not just because it&#39;s the right thing to do, but because it makes our company stronger. If you share our values and our enthusiasm for small businesses, you will find a home at Gusto.</p>\n<p>Gusto is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or other applicable legally protected characteristic.</p>\n<p>Gusto considers qualified applicants with criminal histories, consistent with applicable federal, state and local law. Gusto is also committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures.</p>\n<p>We want to see our candidates perform to the best of their ability. If you require a medical or religious accommodation at any time throughout your candidate journey, please fill out this form and a member of our team will get in touch with you.</p>\n<p>Gusto takes security and protection of your personal information very seriously. Please review our Fraudulent Activity Disclaimer. 
Personal information collected and processed as part of your Gusto application will be subject to Gusto&#39;s Applicant Privacy Notice.</p>","url":"https://yubhub.co/jobs/job_2445da38-a0f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Gusto","sameAs":"https://www.gusto.com/","logo":"https://logos.yubhub.co/gusto.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gusto/jobs/6447954","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$251,000-$309,000 /yr","x-skills-required":["Ruby","Rails","TypeScript","React","RESTful APIs","GraphQL","backend services","scalable systems","secure systems","resilient systems","collaborative development environment","test-driven development","backend system maintenance","API development","data science","compliance","operations","cross-functional teams","engineering organization development","team development","mentorship","AI tools","AI-driven outputs","emerging backend technologies","AI frameworks","patterns"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:59.500Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Ruby, Rails, TypeScript, React, RESTful APIs, GraphQL, backend services, scalable systems, secure systems, resilient systems, collaborative development environment, test-driven development, backend system maintenance, API development, data science, compliance, operations, cross-functional teams, engineering organization development, team development, mentorship, AI tools, AI-driven outputs, emerging backend technologies, AI frameworks, 
patterns","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":251000,"maxValue":309000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f2bc1be2-478"},"title":"Senior Technical Solutions Engineer, Platform","description":"<p>As a Senior Technical Solutions Engineer, you will provide technical support for Databricks Platform related issues and resolve any challenges involving the Databricks unified analytics platform.</p>\n<p>You will assist customers in their Databricks journey and provide them with the guidance and knowledge that they need to accomplish value and achieve their strategic goals using our products.</p>\n<p>They will look to you for answers to everything from basic technical questions to complex architectural scenarios spanning across the entire Big Data ecosystem.</p>\n<p>You will report to the Senior Manager of Technical Solutions.</p>\n<p>Key responsibilities include: Troubleshooting and resolving complex customer issues related to Databricks platform Providing best practices support for custom-built solutions developed by Databricks customers Delivering suggestions for improving performance in customer-specific environments Assisting with issues around third-party integrations with Databricks environment Demonstrating and coordinating with engineering and escalation teams to achieve resolution of customer issues and requests Participating in the creation and maintenance of company documentation and knowledge articles Being a true proponent of customer advocacy Strengthening your AWS/Azure and Databricks platform expertise through learning and internal training programs Participating in weekend and weekday on call rotation</p>\n<p>Requirements include: Minimum 4 years experience designing, building, testing, and maintaining Python/Java/Scala based applications Expert level knowledge in python is desired 
Solid experience with SQL-based databases is required Linux/Unix administration skills Hands-on experience with AWS, Azure, or GCP Candidate must possess excellent English written and oral communication skills Experience with &quot;Distributed Big Data Computing&quot; environment Technical degree or the equivalent experience</p>","url":"https://yubhub.co/jobs/job_f2bc1be2-478","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7902994002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","SQL","Linux/Unix administration","AWS","Azure","GCP","Distributed Big Data Computing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:52.913Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Rica"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, SQL, Linux/Unix administration, AWS, Azure, GCP, Distributed Big Data Computing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_04c1ff49-2d1"},"title":"Data Platform Solutions Architect (Professional Services)","description":"<p>We&#39;re hiring for multiple roles within our Professional Services team. As a Data Platform Solutions Architect, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. 
You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Extensive experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel 
to customers 10% of the time</li>\n</ul>\n<p>Databricks Certification is preferred but not essential</p>","url":"https://yubhub.co/jobs/job_04c1ff49-2d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8396801002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","technical project delivery","documentation and white-boarding skills"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:58:52.546Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, technical project delivery, documentation and white-boarding skills, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_33eb12c1-537"},"title":"Solutions Architect","description":"<p>Join our team as a Solutions Architect and play a crucial role in helping our customers solve their complex data challenges. 
As a key member of our Field Engineering team, you will work closely with customers to understand their needs and develop customized solutions using our Data Intelligence Platform.</p>\n<p>We&#39;re looking for someone with a strong technical background in big data analytics, who can operate as a trusted advisor to our customers. You will be responsible for developing successful relationships with clients, providing technical and business value, and scaling best practices in your field.</p>\n<p>As a Solutions Architect, you will:</p>\n<ul>\n<li>Form successful relationships with clients throughout your assigned territory</li>\n<li>Operate as an expert in big data analytics to excite customers about Databricks</li>\n<li>Develop into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation</li>\n<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist</li>\n</ul>\n<p>To succeed in this role, you will need:</p>\n<ul>\n<li>Experience with coding in a core programming language (i.e., Python, Java, Scala)</li>\n<li>A base level in Spark</li>\n<li>A builder mindset with a passion for quick prototyping and experience in vibe coding</li>\n<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s)</li>\n<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences</li>\n<li>Joy in drilling deeper on tough technical questions and solution architecture while always keeping the big picture in mind</li>\n</ul>\n<p>Fluency in German is a strong advantage, but not required.</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region, please visit our website.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>","url":"https://yubhub.co/jobs/job_33eb12c1-537","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8500326002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","Spark","Big Data Analytics","Cloud Computing"],"x-skills-preferred":["Machine Learning","Data Science","Cloud Architecture"],"datePosted":"2026-04-18T15:58:49.054Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, Spark, Big Data Analytics, Cloud Computing, Machine Learning, Data Science, Cloud Architecture"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8c314c6f-013"},"title":"Corporate and Securities Counsel","description":"<p>We&#39;re looking for a Corporate and Securities Counsel to join our team. 
As a member of our legal department, you will provide support for and advice on general corporate governance and securities law matters and projects. This includes ensuring compliance with public company reporting and disclosure requirements, supporting the annual meeting of stockholders, and supporting the administration of key corporate policies and programs.</p>\n<p>You will also provide support for the Board of Directors and executive leadership team on governance- and shareholder-related matters, including preparation and review of agenda and materials for meetings of the Board and its committees. Additionally, you will support corporate transactions, including drafting and negotiation of NDAs, letters of intent, definitive agreements, and other ancillary documents in mergers and acquisitions and financial transactions.</p>\n<p>To be successful in this role, you will need to have a Juris Doctor (J.D.) from an ABA accredited institution, a current, active license to practice law in the United States, and prior relevant experience in corporate governance and securities law. You should also have excellent judgment, attention to detail, communication (verbal and written), organizational and interpersonal skills, and the ability to work collaboratively across multiple groups and functions.</p>","url":"https://yubhub.co/jobs/job_8c314c6f-013","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8464456002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$144,900-$227,700 USD","x-skills-required":["Juris Doctor (J.D.) 
from an ABA accredited institution","Current, active license to practice law in the United States","Prior relevant experience in corporate governance and securities law","Excellent judgment, attention to detail, communication (verbal and written), organizational and interpersonal skills","Ability to work collaboratively across multiple groups and functions"],"x-skills-preferred":["Substantive corporate transactional experience, including mergers and acquisitions or strategic financing","Ability to leverage AI and emerging technologies to improve operational efficiency, analyze data, and support scalable business solutions"],"datePosted":"2026-04-18T15:58:48.693Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bethesda, Maryland, United States; Waltham, Massachusetts, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Legal","industry":"Technology","skills":"Juris Doctor (J.D.) from an ABA accredited institution, Current, active license to practice law in the United States, Prior relevant experience in corporate governance and securities law, Excellent judgment, attention to detail, communication (verbal and written), organizational and interpersonal skills, Ability to work collaboratively across multiple groups and functions, Substantive corporate transactional experience, including mergers and acquisitions or strategic financing, Ability to leverage AI and emerging technologies to improve operational efficiency, analyze data, and support scalable business solutions","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":144900,"maxValue":227700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cec208e5-844"},"title":"Solutions Architect, Applied AI (Digital Native Business)","description":"<p>As an Applied AI team member at Anthropic, you will be a Pre-Sales architect 
focused on becoming a trusted technical advisor helping large enterprises understand the value of Claude and paint the vision on how they can successfully integrate and deploy Claude into their technology stack.</p>\n<p>You&#39;ll combine your deep technical expertise with customer-facing skills to architect innovative LLM solutions that address complex business challenges while maintaining our high standards for safety and reliability.</p>\n<p>Working closely with our Sales, Product, and Engineering teams, you&#39;ll guide customers from initial technical discovery through successful deployment. You&#39;ll leverage your expertise to help customers understand Claude&#39;s capabilities, develop evals, and design scalable architectures that maximize the value of our AI systems.</p>\n<p>Responsibilities:</p>\n<p>Partner with account executives to deeply understand customer requirements and translate them into technical solutions, ensuring alignment between business objectives and technical implementation</p>\n<p>Serve as the primary technical advisor to enterprise customers throughout their Claude adoption journey, from discovery to initial evaluation through deployment. You will need to coordinate internally across multiple teams &amp; stakeholders to drive customer success</p>\n<p>Support customers building with both the Claude API and Claude for Work</p>\n<p>Create and deliver compelling technical content tailored to different audiences. 
You will need to be able to run the gamut from technical deep dives with engineering &amp; development teams to business-value-focused conversations with executives</p>\n<p>Guide technical architecture decisions and help customers integrate Claude effectively into their existing technology stack</p>\n<p>Help customers develop evaluation frameworks to measure Claude&#39;s performance for their specific use cases</p>\n<p>Identify common integration patterns and contribute insights back to our Product and Engineering teams</p>\n<p>Travel occasionally to customer sites for workshops, technical deep dives, and relationship building</p>\n<p>Maintain strong knowledge of the latest developments in LLM capabilities and implementation patterns</p>\n<p>You may be a good fit if you have:</p>\n<p>5+ years of experience in technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager</p>\n<p>Experience working with enterprise customers, navigating complex buying cycles involving multiple stakeholders</p>\n<p>Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders, including C-suite executives, engineering &amp; IT teams, and more</p>\n<p>Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders</p>\n<p>Experience designing scalable cloud architectures and integrating with enterprise systems</p>\n<p>Comfortable with Python</p>\n<p>Familiarity with common LLM frameworks and tools or a background in machine learning or data science</p>\n<p>Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</p>\n<p>A love of teaching, mentoring, and helping others succeed</p>\n<p>Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders. 
You enjoy engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</p>\n<p>Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems</p>\n<p>Please note this role requires 3 days in office per week.</p>\n<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (“OTE”) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $240,000-$315,000 USD</p>","url":"https://yubhub.co/jobs/job_cec208e5-844","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5065835008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$240,000-$315,000 USD","x-skills-required":["Technical customer-facing roles","Enterprise customers","Complex buying cycles","Technical communication skills","Scalable cloud architectures","Python","LLM frameworks and tools","Machine learning or data science"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:42.551Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical customer-facing roles, Enterprise customers, Complex buying cycles, Technical communication skills, Scalable cloud architectures, Python, LLM 
frameworks and tools, Machine learning or data science","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":240000,"maxValue":315000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ba66dcb1-8d9"},"title":"Research Scientist, AI Controls and Monitoring","description":"<p>We&#39;re seeking a Research Scientist to join our team focused on AI Controls and Monitoring. As a key member of our team, you will design methods, systems, and experiments to ensure that advanced AI models and agents remain aligned with intended goals, even in high-stakes or adversarial environments.</p>\n<p>Your responsibilities will include developing monitoring techniques and observability methods, researching mechanisms for layered control, and designing red-team simulations to probe weaknesses in oversight and control mechanisms.</p>\n<p>To succeed in this role, you&#39;ll need a strong background in machine learning, particularly in generative AI, and at least three years of experience addressing sophisticated ML problems. 
You should be comfortable designing control and monitoring experiments for AI systems, building prototype systems, and quickly turning new ideas from the research literature into working prototypes.</p>\n<p>In addition to your technical expertise, you&#39;ll need strong written and verbal communication skills to operate in a cross-functional team.</p>\n<p>This role offers a competitive salary range of $216,000-$270,000 USD, depending on location and experience, as well as equity-based compensation and benefits, including comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>","url":"https://yubhub.co/jobs/job_ba66dcb1-8d9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4675694005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["Machine Learning","Generative AI","AI Control Protocols","AI Risk Evaluations","Runtime Monitoring","Anomaly Detection","Observability"],"x-skills-preferred":["Post-Training and RL Techniques","Scalable Oversight","Interpretability","Debate"],"datePosted":"2026-04-18T15:58:38.219Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Generative AI, AI Control Protocols, AI Risk Evaluations, Runtime Monitoring, Anomaly Detection, Observability, Post-Training and RL Techniques, Scalable Oversight, Interpretability, 
Debate","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1fa27595-f10"},"title":"Sr Software Engineer-Networking","description":"<p>We are seeking experienced Senior Software Engineers with large-scale distributed system experience to join the Networking Infrastructure team. You will design and automate the networking foundations for large-scale compute clusters across all the major cloud providers, connecting millions of VMs running on Databricks. You will lead the design and development of secure and scalable connectivity that powers all the data / AI workloads. You will work closely with cross-functional teams, including product management, operations, and other engineering teams, to ensure the delivery of robust, scalable, and efficient networking systems.</p>\n<p>This is an excellent opportunity for a hands-on leader who thrives in a fast-paced environment and is passionate about solving novel multicloud and distributed systems challenges.</p>\n<p>The ideal candidate will have 5+ years of production-level experience in one of: Python, Java, Scala, C++, or a similar language, and 4+ years of experience developing large-scale distributed systems from scratch. Experience working on a SaaS platform or with Service-Oriented Architectures is also required. 
Extensive experience working on large-scale compute clusters, network connectivity, and automation is necessary.</p>","url":"https://yubhub.co/jobs/job_1fa27595-f10","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8211452002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["Python","Java","Scala","C++","large-scale distributed systems","SaaS platform","Service-Oriented Architectures","network connectivity","automation"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:34.971Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, C++, large-scale distributed systems, SaaS platform, Service-Oriented Architectures, network connectivity, automation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5b244f27-9fd"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform. 
You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases. You will work with engagement managers to scope a variety of professional services work with input from the customer.</p>\n<p>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications. Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</p>\n<p>Provide an escalated level of support for customer operational issues. 
You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</p>\n<p>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</p>\n<p>The ideal candidate will have:</p>\n<ul>\n<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfort writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Experience with design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n<li>Documentation and white-boarding skills</li>\n<li>Experience working with clients and managing conflicts</li>\n<li>The ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n</ul>\n<p>Travel to customers 20% of the time.</p>","url":"https://yubhub.co/jobs/job_5b244f27-9fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461258002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production 
deployments","MLOps","end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:34.588Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Raleigh, North Carolina"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b02d4cf4-599"},"title":"Director, Business Systems","description":"<p>About the Role</p>\n<p>The Director of Business Systems will be responsible for building real depth across Finance, GTM, Operations, and People systems, connecting these capabilities into a powerful, AI-integrated backbone.</p>\n<p>You are a builder-first leader who will blend enterprise architecture with cutting-edge AI automation. Your goal is to eliminate fragmentation and enable every function to move faster by deploying internal AI agents that reduce manual work and surface actionable insights.</p>\n<p>Key Responsibilities</p>\n<ul>\n<li>Enterprise Application Strategy: Own the full lifecycle of our application ecosystem, including ERP (Finance), CRM (GTM), HRIS (People), and specialized AI tools. Ensure these systems converge into a secure, scalable backbone.</li>\n</ul>\n<ul>\n<li>AI Agent Deployment: Identify opportunities to replace manual workflows with AI/LLM-powered agents. 
Build and manage &#39;internal agents&#39; that automate forecasting, revenue capture, and employee self-service.</li>\n</ul>\n<ul>\n<li>Cross-Functional Partnership: Serve as the primary technology partner to the leadership team. Align system roadmaps with departmental goals to ensure seamless data flow from production to the back office.</li>\n</ul>\n<ul>\n<li>Operational Excellence: Mature our core systems by establishing disciplined change management, clear data governance, and measurable SLAs. Stabilize environments to support audit and regulatory requirements.</li>\n</ul>\n<ul>\n<li>Team Leadership: Lead a high-impact team that fosters a culture of curiosity, speed, and user-centricity.</li>\n</ul>\n<p>Ideally, you have:</p>\n<ul>\n<li>10+ years of experience leading business systems or enterprise applications in a fast-paced, high-growth environment.</li>\n</ul>\n<ul>\n<li>Technical Depth: Strong functional understanding of Salesforce (or equivalent CRM), NetSuite (or equivalent ERP), and Workday (or equivalent HRIS).</li>\n</ul>\n<ul>\n<li>AI/Automation Mindset: Proven track record of designing and deploying AI/LLM-powered workflows or agentic systems to improve business efficiency.</li>\n</ul>\n<ul>\n<li>Architecture Skills: Ability to design scalable data flows and APIs that link disparate SaaS tools into a cohesive ecosystem.</li>\n</ul>\n<ul>\n<li>Stakeholder Mastery: Experience influencing C-level executives and translating complex technical needs into business outcomes.</li>\n</ul>\n<ul>\n<li>Skilled in Python or SQL to personally audit or prototype automation logic.</li>\n</ul>\n<ul>\n<li>Extensive experience and in-depth knowledge of the functionality of the ERP modules with emphasis on Sales – Order to Cash &amp; Sales Audit, Purchasing – Direct &amp; Indirect and Finance – General Ledger, AP &amp; AR, Procure to Pay, Sales Audit, Fixed Assets, International Consolidations &amp; Reporting.</li>\n</ul>\n<ul>\n<li>Experience with GTM systems such as 
HubSpot, Outreach, Clari, CPQ (e.g., Salesforce CPQ), and similar tools.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p>The base salary range for this full-time position in the location of San Francisco is: $235,200-$294,000 USD</p>","url":"https://yubhub.co/jobs/job_b02d4cf4-599","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4657910005","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$235,200-$294,000 USD","x-skills-required":["Salesforce","NetSuite","Workday","Python","SQL","AI/LLM-powered workflows","Scalable data flows","APIs","Disparate SaaS tools","C-level executives","Business outcomes","ERP modules","GTM systems","HubSpot","Outreach","Clari","CPQ"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:33.977Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Salesforce, NetSuite, Workday, Python, SQL, AI/LLM-powered 
workflows, Scalable data flows, APIs, Disparate SaaS tools, C-level executives, Business outcomes, ERP modules, GTM systems, HubSpot, Outreach, Clari, CPQ","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":235200,"maxValue":294000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_63a79841-36e"},"title":"Solutions Architect (Vietnam)","description":"<p>At Databricks, we&#39;re seeking a Solutions Architect to join our Field Engineering team in Vietnam. As a key member of our team, you will work closely with customers to understand their complex data challenges and provide technical expertise to demonstrate how our Data Intelligence Platform can help them solve these issues.</p>\n<p>You will form successful relationships with clients throughout Vietnam, providing technical and business value to Databricks customers in collaboration with Account Executives. 
You will operate as an expert in big data analytics, developing into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Developing customer relationships and building internal partnerships with account executives and teams</li>\n<li>Engaging customers in technical sales, challenging their questions, guiding clear outcomes, and communicating technical and value propositions</li>\n<li>Prior experience with coding in a core programming language (i.e., Python, Java, Scala) and willingness to learn a base level of Spark</li>\n<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s)</li>\n<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences requiring an ability to context switch in levels of technical depth</li>\n<li>Proficiency in the Vietnamese language is required as this role serves clients based in Vietnam and involves direct customer communications in the Vietnamese language</li>\n</ul>\n<p>In return, you will have the opportunity to grow your knowledge and expertise to the level of a technical and/or industry specialist, and contribute to the success of our customers and the growth of our organization.</p>\n<p>If you&#39;re passionate about working with data and AI, and want to make a real impact, we encourage you to apply for this exciting opportunity.</p>
","url":"https://yubhub.co/jobs/job_63a79841-36e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8472732002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","Big Data Analytics","Spark","Cloud Computing","Data Science","Machine Learning"],"x-skills-preferred":["Data Engineering","Data Architecture","Cloud Security","DevOps"],"datePosted":"2026-04-18T15:58:31.724Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, Big Data Analytics, Spark, Cloud Computing, Data Science, Machine Learning, Data Engineering, Data Architecture, Cloud Security, DevOps"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4daeb1d2-f04"},"title":"Senior Software Engineer - Fullstack","description":"<p>We are seeking a senior software engineer to join our team in Vancouver. As a fullstack software engineer, you will work with your team and product management to make insights from data simple. You&#39;ll set the foundation for how we build robust, scalable, and delightful products.</p>\n<p>Our customers increasingly use Databricks to analyze petabyte-scale logs in real time. This creates new challenges across the entire data processing pipeline, including ingestion, indexing, processing, and the user experience itself. Our customers are also using Databricks to launch AI/BI, which is redefining Business Intelligence for the AI age. 
We have several open roles across the teams below:</p>\n<ul>\n<li>Log Analytics: Our customers increasingly use Databricks to analyze petabyte-scale logs in real time.</li>\n<li>AI/BI: AI/BI is redefining Business Intelligence for the AI age.</li>\n<li>Unity Catalog Business Semantics: Context is everything for AI. For enterprise data, that context needs to be governed and managed, which is what Unity Catalog Business Semantics offers.</li>\n<li>Databricks Apps: Databricks Apps is one of the fastest-growing products at Databricks, used by more than 2,500 customers who have created more than 20,000 apps.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>5+ years of experience with HTML, CSS, and JavaScript.</li>\n<li>Passion for user experience and design and a deep understanding of front-end architecture.</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>\n<li>Motivated by delivering customer value.</li>\n<li>Experience with modern JavaScript frameworks (e.g., React, Angular, Vue.js, or Ember).</li>\n<li>5+ years of experience with server-side web technologies (e.g., Node.js, Java, Python, Scala, C#, C++, Go).</li>\n<li>Good knowledge of SQL.</li>\n<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, or Kubernetes.</li>\n<li>Experience developing large-scale distributed systems.</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. 
The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. Canada Pay Range $146,200-$201,100 CAD</p>","url":"https://yubhub.co/jobs/job_4daeb1d2-f04","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8099342002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$146,200-$201,100 CAD","x-skills-required":["HTML","CSS","JavaScript","Node.js","Java","Python","Scala","C#","C++","Go","SQL","AWS","Azure","GCP","Docker","Kubernetes"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:30.534Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"HTML, CSS, JavaScript, Node.js, Java, Python, Scala, C#, C++, Go, SQL, AWS, Azure, GCP, Docker, Kubernetes","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":146200,"maxValue":201100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e5fa8591-cb8"},"title":"Solutions Architect: Data & AI","description":"<p>As a Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. 
You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the field.</p>\n<p>You will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise ecosystem.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your clients through evaluating and adopting Databricks including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>\n</ul>\n<p>Together with your Account Executive, you will form successful relationships with clients throughout your assigned territory to provide technical and business value.</p>\n<p>What we look for:</p>\n<ul>\n<li>Strong consulting / customer facing experience, working with external clients across a variety of industry markets</li>\n<li>Core strength in either data engineering or data science technologies</li>\n<li>8+ years of experience demonstrating technical concepts, including demos, presenting and white-boarding</li>\n<li>8+ years of experience designing architectures within a public cloud (AWS, Azure or GCP)</li>\n<li>6+ years of experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Coding experience in Python, R, Java, Apache Spark or Scala</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. 
More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>\n<p>Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. 
government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>","url":"https://yubhub.co/jobs/job_e5fa8591-cb8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8353757002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","R","Java","Scala"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:24.843Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, R, Java, Scala"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_43334479-97e"},"title":"Sr Analytics Engineer - GTM Strategy and Operations","description":"<p>As a Senior Analytics Engineer, you will be a critical partner to the Global GTM Strategy &amp; Operations teams, providing the data, AI-driven insights, and infrastructure needed to drive efficiency and effectiveness across the organization.</p>\n<p>You will design, build, and maintain scalable data models, curated reporting tables, forecasts, and dashboards that support everyone from senior executives to individual contributors, empowering them to make informed decisions and spend more time driving customer outcomes.</p>\n<p>Working closely with cross-functional stakeholders, including Sales, 
Finance, Marketing, and other data teams, you will tackle complex data challenges by leveraging structured data, building AI-powered querying assistants, and using tools like Databricks Genie to improve data accessibility, streamline insights, and deliver actionable, reliable solutions across the business.</p>\n<p>You will also play a key role in advancing our newly created AI initiatives and semantic data curation efforts, helping to establish a strong foundation for advanced analytics, automation, and scalable business intelligence.</p>\n<p>The Impact You Will Have:</p>\n<ul>\n<li>Build: You will design and develop analytic tools, including a semantic layer for AI use cases, scalable data models, curated tables, and insightful analyses that empower thousands of field employees and leaders worldwide.</li>\n</ul>\n<ul>\n<li>Architect: You will both manage the requirements gathering and lead execution of strategic analytic projects.</li>\n</ul>\n<ul>\n<li>Scale: You will build and manage relationships with stakeholders across the company, but primarily with the GTM strategy and operations team.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>You have 4+ years of experience working as an Analyst / Data Engineer / Analytics Engineer with B2B sales, marketing, or finance data (GTM experience highly preferred).</li>\n</ul>\n<ul>\n<li>You are data-savvy with 3+ years of SQL and 2+ years of Python experience. Familiarity with data ecosystems and BI tools (e.g., Databricks, PowerBI) is required.</li>\n</ul>\n<ul>\n<li>You have built for scale. You have experience building scalable and productionizable data models with best practices in mind.</li>\n</ul>\n<ul>\n<li>You integrate AI into your daily workflow. 
You have hands-on experience using large language model tools (such as Claude or similar) to accelerate analytics work, from drafting and debugging code to synthesizing requirements and generating documentation.</li>\n</ul>\n<ul>\n<li>You&#39;re comfortable evaluating AI-generated outputs critically and iterating quickly.</li>\n</ul>\n<ul>\n<li>You are passionate about applying AI to transform GTM teams. You bring experience in delivering AI-driven solutions and have the ability to design innovative use cases as well as structure data models and tables that are optimized for AI readiness.</li>\n</ul>\n<ul>\n<li>You excel in partnering with the business, understanding the impact of your work on GTM, and creating innovative solutions.</li>\n</ul>\n<ul>\n<li>You have a track record of cross-functional collaboration and strong stakeholder relationships.</li>\n</ul>\n<ul>\n<li>You excel in a collaborative environment. You translate team member needs into clear tasks and deliverables for contributors.</li>\n</ul>\n<ul>\n<li>You work through dependencies, bottlenecks, and tradeoffs with ease.</li>\n</ul>\n<ul>\n<li>You have a service-oriented mindset.</li>\n</ul>\n<ul>\n<li>You are curious, creative, and kind.</li>\n</ul>\n<p>Pay Range Transparency:</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $133,000-$182,950 USD</p>","url":"https://yubhub.co/jobs/job_43334479-97e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8479036002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$133,000-$182,950 USD","x-skills-required":["SQL","Python","Databricks","PowerBI","Data Engineering","Analytics Engineering","AI","Machine Learning"],"x-skills-preferred":["Large Language Model Tools","Claude","Semantic Data Curation","Advanced Analytics","Automation","Scalable Business Intelligence"],"datePosted":"2026-04-18T15:58:18.439Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Databricks, PowerBI, Data Engineering, Analytics Engineering, AI, Machine Learning, Large 
Language Model Tools, Claude, Semantic Data Curation, Advanced Analytics, Automation, Scalable Business Intelligence","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":133000,"maxValue":182950,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bc116cd3-62a"},"title":"Member of Technical Staff – Web Foundations","description":"<p><strong>About the Role:</strong></p>\n<p>We&#39;re looking for exceptional Fullstack / Web Engineers to work across the stack but have a passion for frontend development and a keen eye for design.</p>\n<p>You&#39;ll architect and optimise user-facing features that power real-time conversations for millions worldwide. Dive into cutting-edge technologies and scalable backend systems, collaborating with top-tier talent to push the boundaries of web performance and innovation.</p>\n<p>You have the ability to thrive in a fast-paced environment, where you proactively tackle high-impact challenges that shape the future of social media, perfect for engineers passionate about crafting seamless, responsive experiences that drive global engagement and redefine digital interaction.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Own and drive features from inception and design to implementation and launch, being the web expert on your team.</li>\n</ul>\n<ul>\n<li>Build and maintain high-quality, performant products and features, leveraging the most modern and cutting-edge web standards, technologies, frameworks, and AI tooling.</li>\n</ul>\n<ul>\n<li>Responsible for fullstack features, including user dashboards, personalised experiences, content delivery, interactive tools, assessments, and real-time analytics</li>\n</ul>\n<ul>\n<li>Lead architecture, scalability, and reliability decisions for high-concurrency, low-latency systems.</li>\n</ul>\n<ul>\n<li>Uphold engineering excellence via 
testing, monitoring, deployment, and secure data handling.</li>\n</ul>\n<ul>\n<li>Drive technical/product decisions with teams and deploy global features to maximise user value.</li>\n</ul>\n<p><strong>Basic Qualifications:</strong></p>\n<ul>\n<li>2+ years of web development experience.</li>\n</ul>\n<ul>\n<li>Expert in TypeScript, Node.js, and modern web frameworks (e.g., React).</li>\n</ul>\n<ul>\n<li>Expert in modern CSS/SASS</li>\n</ul>\n<ul>\n<li>Experience in high-quality UI and UX design</li>\n</ul>\n<ul>\n<li>Proven track record of optimising applications for performance, security, and offline functionality.</li>\n</ul>\n<p><strong>Preferred Skills and Experience:</strong></p>\n<ul>\n<li>5+ years of experience in a web frontend role, working on a large-scale consumer app.</li>\n</ul>\n<ul>\n<li>Experience with backend development, proficiency in one or more of the following: Rust, Go, Java, Python, Scala.</li>\n</ul>","url":"https://yubhub.co/jobs/job_bc116cd3-62a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5063930007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["TypeScript","Node.js","React","modern CSS/SASS","UI and UX design"],"x-skills-preferred":["Rust","Go","Java","Python","Scala"],"datePosted":"2026-04-18T15:58:16.842Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TypeScript, Node.js, React, modern CSS/SASS, UI and UX design, Rust, Go, Java, Python, 
Scala","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_12a4cdb3-95b"},"title":"Senior Marketing Operations Manager, B2B Sales","description":"<p>We&#39;re looking for a Senior Marketing Operations Manager to architect and optimize our B2B sales-led and channel-driven GTM engine. This role will define and maintain the systems, processes, and operational rigor that align Marketing, SDR, Sales, and Partner teams.</p>\n<p>The ideal candidate will have hands-on experience administering Marketo, Salesforce, and LeanData, and deep expertise with lead routing, lead-to-account matching, and data orchestration workflows using LeanData or similar workflow automation tools.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own and evolve the GTM systems architecture, ensuring Salesforce, Marketo, LeanData, ZoomInfo, Qualified, Outreach, and Clay.io work together as a best-in-class, integrated ecosystem.</li>\n<li>Lead the design, governance, and optimization of data orchestration workflows using LeanData, including routing, prioritization, handoffs, and conversion logic across Marketing, SDR, and Sales teams.</li>\n<li>Design and execute a future-state operational roadmap focused on scaling B2B demand generation, ABM, and partner-led growth through automation, improved data flows, and AI-powered insights.</li>\n<li>Build automated lifecycle processes for lead scoring, enrichment, qualification, and cross-functional handoffs using LeanData, Zapier, Clay, Segment, and AI agents.</li>\n<li>Enhance sales productivity by implementing agentic workflows (e.g., automated follow-ups, enrichment workflows, SDR assistance tools) in Outreach and Salesforce.</li>\n<li>Manage data governance across Salesforce, Marketo, and Segment, ensuring reliable attribution, reporting, and 
pipeline visibility.</li>\n<li>Create AI-informed dashboards and reporting on pipeline performance, lead velocity, conversion, campaign effectiveness, and partner impact.</li>\n<li>Partner with RevOps, Sales Systems, and Engineering to operationalize cross-functional processes that reduce manual work and improve efficiency.</li>\n<li>Support partner/VAR motions through automated attribution, routing rules, partner engagement workflows, and integrated co-marketing processes.</li>\n<li>Continuously evaluate new tools, AI capabilities, and operational improvements that elevate our GTM infrastructure.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>4+ years in Marketing Operations or Revenue Operations supporting B2B sales-led funnels.</li>\n<li>Hands-on experience administering Marketo, Salesforce, and LeanData.</li>\n<li>Deep expertise with lead routing, lead-to-account matching, and data orchestration workflows using LeanData or similar workflow automation tools.</li>\n<li>Proven ability to design automated workflows, operational processes, and scalable cross-system integrations.</li>\n<li>Experience using AI-driven tools or agentic workflows to automate SDR tasks, enrich lead data, or accelerate GTM execution.</li>\n<li>Strong analytical, system design, and documentation skills; able to translate business needs into scalable technical workflows.</li>\n<li>Experience collaborating with Sales, SDR, RevOps, and System/Engineering teams.</li>\n</ul>\n<p>Bonus Points:</p>\n<ul>\n<li>Experience in FinTech or enterprise B2B SaaS environments.</li>\n<li>Familiarity with conversational marketing/ABM platforms like Qualified.</li>\n<li>Experience with tools like LeanData and Outreach in support of lead routing and SDR/BDR workflows.</li>\n<li>Experience with paid funnel operations is a plus (Google Ads, LinkedIn Ads, etc.).</li>\n<li>Understanding of partner/VAR operational workflows and partner attribution logic.</li>\n<li>Ability to design scalable integrations using tools 
like Segment, Zapier, or Workato-style platforms.</li>\n</ul>\n<p>Compensation:</p>\n<p>The expected salary range for this role is $134,696 - $168,370.</p>","url":"https://yubhub.co/jobs/job_12a4cdb3-95b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8380680002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$134,696 - $168,370","x-skills-required":["Marketo","Salesforce","LeanData","Lead routing","Lead-to-account matching","Data orchestration workflows","AI-driven tools","Agentic workflows","Automation","Improved data flows","AI-powered insights","Cross-system integrations","Strong analytical skills","System design","Documentation skills"],"x-skills-preferred":["FinTech","Enterprise B2B SaaS","Conversational marketing/ABM platforms","Paid funnel operations","Partner/VAR operational workflows","Scalable integrations"],"datePosted":"2026-04-18T15:58:13.336Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Finance","skills":"Marketo, Salesforce, LeanData, Lead routing, Lead-to-account matching, Data orchestration workflows, AI-driven tools, Agentic workflows, Automation, Improved data flows, AI-powered insights, Cross-system integrations, Strong analytical skills, System design, Documentation skills, FinTech, Enterprise B2B SaaS, Conversational marketing/ABM platforms, Paid funnel operations, Partner/VAR operational workflows, Scalable 
integrations","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":134696,"maxValue":168370,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c2e7ae82-8ff"},"title":"Sr. Delivery Solutions Architect","description":"<p>As a Senior Delivery Solutions Architect at Databricks, you will play a crucial role in empowering customers to solve the world&#39;s toughest data problems using the Databricks Data Intelligence Platform. You will collaborate with sales and field engineering teams to accelerate the adoption and growth of the Databricks platform in your customers. Your primary goal will be to ensure customer success by increasing focus and technical accountability to our most complex customers who need guidance to accelerate usage on Databricks workloads that they have already selected.</p>\n<p>This is a hybrid technical and commercial role, requiring you to drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, and creating and driving plans and strategies for Databricks colleagues to build upon. 
You will also be responsible for becoming the post-sale technical lead across all Databricks products, using your skills and technical credibility to engage and communicate at all levels of an organisation.</p>\n<p>Your impact will be significant.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Engaging with Solutions Architects to understand the full use case demand plan for prioritised customers</li>\n<li>Leading the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>\n<li>Being the accountable technical leader assigned to specific use cases and customer(s) across multiple selling teams and internal stakeholders</li>\n<li>Creating, owning, and executing a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>\n<li>Navigating Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs</li>\n<li>Developing an execution plan that covers all activities of all customer-facing technical roles and teams to cover main use cases moving from &#39;win&#39; to production, enablement/user growth plan, product adoption, organic needs for current investment, executive and operational governance, and providing internal and external updates</li>\n</ul>\n<p>To succeed in this role, you will need to have 10+ years of experience in technical project or program delivery within the domain of Data and AI, with a 
strong understanding of solution architecture related to distributed data systems, programming experience in Python, SQL, or Scala, and experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role.</p>","url":"https://yubhub.co/jobs/job_c2e7ae82-8ff","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8342273002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Scala","Solution architecture","Distributed data systems","Customer-facing pre-sales","Technical architecture","Customer success","Consulting"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:05.768Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Scala, Solution architecture, Distributed data systems, Customer-facing pre-sales, Technical architecture, Customer success, Consulting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eed925a1-b05"},"title":"Sr. Staff/ Staff Backline Technical Solution engineer","description":"<p>At Databricks, we enable data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. 
As a Backline Technical Solutions Engineer, you will help our customers succeed with the Databricks platform by resolving complex technical customer escalations and working closely with the frontline support team.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Troubleshooting and resolving complex customer issues related to the Databricks Platform by analysing core component metrics and logs</li>\n<li>Providing suggestions and best practice guidance for improving performance in customer-specific environments and providing product improvement feedback</li>\n<li>Helping the support team with detailed troubleshooting guides and runbooks</li>\n<li>Contributing to automation and tooling programs to make daily troubleshooting efficient</li>\n<li>Partnering with the engineering team and spreading awareness of upcoming features and releases</li>\n<li>Identifying and contributing supportability features back into the product</li>\n<li>Demonstrating ownership and coordinating with engineering and escalation teams to achieve resolution of customer issues and requests</li>\n<li>Participating in weekend and weekday on-call rotation</li>\n</ul>\n<p>We look for candidates with 12+ years of industry experience, expertise in scripting using Python or Shell, and comfort with black box troubleshooting.
Experience with supporting Java, Scala or Python based applications, distributed big data computing environments, SQL-based database systems, Linux and network troubleshooting, and cloud services such as AWS, Azure or GCP is also required.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eed925a1-b05","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8375176002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","Python","Shell","Distributed Big Data Computing","SQL-based Database Systems","Linux","Network Troubleshooting","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:03.133Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, Python, Shell, Distributed Big Data Computing, SQL-based Database Systems, Linux, Network Troubleshooting, AWS, Azure, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_04ee7215-acf"},"title":"Sr. Manager, Engineering - Model Serving","description":"<p>At Databricks, we enable data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Our Model Serving product provides enterprises with a unified, scalable, and governed platform to deploy and manage AI/ML models. 
As a Senior Engineering Manager, you will lead the team owning both the product experience and the foundational infrastructure of Model Serving, shaping customer-facing capabilities while designing for scalability, extensibility, and performance across both CPU and GPU inference. The impact you will have includes leading, mentoring, and growing a high-performing engineering team, defining and owning the product and technical roadmap for Model Serving, collaborating closely with product, research, platform, and infrastructure teams, and ensuring Model Serving meets stringent SLAs, SLOs, and performance and reliability goals.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading, mentoring, and growing a high-performing engineering team responsible for both the customer-facing Model Serving product and its foundational infrastructure.</li>\n<li>Defining and owning the product and technical roadmap for Model Serving, balancing customer experience, functionality, and foundational investments across deployment, inference, monitoring, and scaling.</li>\n<li>Collaborating closely with product, research, platform, and infrastructure teams to drive end-to-end delivery from ideation and prioritization to launch and operation.</li>\n<li>Ensuring Model Serving meets stringent SLAs, SLOs, and performance and reliability goals, continuously improving operational efficiency and customer experience.</li>\n<li>Driving architectural decisions and product design around latency, throughput, autoscaling, GPU/CPU placement, and cost optimization.</li>\n<li>Advocating for customer needs through direct engagement, ensuring engineering decisions translate to clear product impact.</li>\n<li>Promoting best practices in code quality, testing, observability, and operational readiness.</li>\n<li>Fostering a culture of excellence, inclusion, and continuous improvement across the team.</li>\n<li>Partnering with recruiting to attract, hire, and develop top-tier engineering 
talent.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_04ee7215-acf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8211957002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$217,000-$312,200 USD","x-skills-required":["technical leadership","large-scale distributed systems","real-time serving systems","architectural design","operational excellence","production systems","SLAs","SLOs","GPU performance optimization","concurrency","caching","scalability concepts"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:02.797Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"technical leadership, large-scale distributed systems, real-time serving systems, architectural design, operational excellence, production systems, SLAs, SLOs, GPU performance optimization, concurrency, caching, scalability concepts","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":217000,"maxValue":312200,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b79d9627-55a"},"title":"Research Engineer, Infrastructure, Training Systems","description":"<p>We&#39;re seeking an infrastructure research engineer to design and build scalable, efficient training systems for large models. As a key member of our team, you&#39;ll take ownership of the training stack end-to-end, ensuring every GPU cycle drives scientific progress. 
Your goal is to make experimentation and training at Thinking Machines fast and reliable, allowing our research teams to focus on science, not system bottlenecks.</p>\n<p>Key responsibilities include designing, implementing, and optimizing distributed training systems, developing high-performance optimizations, and establishing standards for reliability, maintainability, and security. You&#39;ll collaborate with researchers and engineers to build scalable infrastructure and publish learnings through internal documentation, open-source libraries, or technical reports.</p>\n<p>We&#39;re looking for someone who blends deep systems and performance expertise with a curiosity for machine learning at scale. A strong understanding of deep learning frameworks, such as PyTorch, and experience working on distributed training for large models are preferred. If you have a track record of improving research productivity through infrastructure design or process improvements, that&#39;s a plus.</p>\n<p>This role is based in San Francisco, California, and offers a competitive salary range of $350,000 - $475,000 USD per year, depending on background, skills, and experience. 
We sponsor visas and offer generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b79d9627-55a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines Lab","sameAs":"https://thinkingmachines.ai/","logo":"https://logos.yubhub.co/thinkingmachines.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5013932008","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD per year","x-skills-required":["deep learning frameworks","distributed training","high-performance optimizations","reliability, maintainability, and security","scalable infrastructure"],"x-skills-preferred":["past experience working on distributed training for large models","track record of improving research productivity through infrastructure design or process improvements","contributions to open-source ML infrastructure"],"datePosted":"2026-04-18T15:57:59.640Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"deep learning frameworks, distributed training, high-performance optimizations, reliability, maintainability, and security, scalable infrastructure, past experience working on distributed training for large models, track record of improving research productivity through infrastructure design or process improvements, contributions to open-source ML 
infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6d639959-bd7"},"title":"Senior Software Engineer","description":"<p>JOB DESCRIPTION:</p>\n<p>About EarnIn</p>\n<p>EarnIn is a pioneer of earned wage access, offering financial flexibility to individuals living paycheck to paycheck.</p>\n<p>We&#39;re seeking experienced, passionate, and resourceful senior engineers to join our backend teams. As a backend engineer, you will work cross-functionally with various teams and contribute to the design and development of our backend services.</p>\n<p>This position will be a hybrid role based in our Bengaluru office, as part of our expanding site presence, with 2 days per week in the office. EarnIn offers excellent benefits for our employees, including healthcare, internet and cell phone reimbursement, a learning and development stipend, and potential opportunities to travel to our headquarters in Mountain View.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Design and implement features robust enough to support our rapid expansion.</li>\n</ul>\n<ul>\n<li>Drive the implementation of new features by breaking complex problems down to their essentials, translating that complexity into elegant design, and creating high-quality, maintainable code.</li>\n</ul>\n<ul>\n<li>Create and maintain test automation to enable continuous integration and development velocity.</li>\n</ul>\n<ul>\n<li>Design &amp; deliver thoughtfully crafted REST APIs to drive the interactions between our client applications and backend services.</li>\n</ul>\n<ul>\n<li>Collaborate and mentor other engineers while providing thoughtful guidance using code, design, and architecture reviews.</li>\n</ul>\n<ul>\n<li>Work cross-functionally with other teams (data science, design, product, 
marketing, analytics).</li>\n</ul>\n<ul>\n<li>Leverage a broad skill set and help us implement and learn new technologies quickly.</li>\n</ul>\n<ul>\n<li>Provide and receive design and implementation evaluations and improve with each iteration.</li>\n</ul>\n<ul>\n<li>Debug production issues across our services infrastructure and multiple levels of our stack.</li>\n</ul>\n<ul>\n<li>Think about distributed systems &amp; services, and care passionately about producing high-quality code.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>4+ years of development experience in Software Engineering</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s, Master&#39;s, or PhD degree in computer science, computer engineering, or a related technical discipline, or equivalent industry experience.</li>\n</ul>\n<ul>\n<li>Proficient in at least one modern programming language such as C#, Java, Python, Go, or Scala.</li>\n</ul>\n<ul>\n<li>Hands-on experience working with various databases (DynamoDB, MySQL, ElasticSearch) and data pipeline technologies.</li>\n</ul>\n<ul>\n<li>Experience with continuous integration and delivery tools.</li>\n</ul>\n<ul>\n<li>Experience in developing and executing functional and integration tests.</li>\n</ul>\n<ul>\n<li>Excellent written and verbal communication skills.</li>\n</ul>\n<ul>\n<li>Ability to thrive in a fast-paced, dynamic environment and have a bias towards action and results.</li>\n</ul>\n<ul>\n<li>Experience with Kubernetes, microservices, and event-driven architecture is a strong plus.</li>\n</ul>\n<ul>\n<li>Experience using AI-assisted development tools (e.g., Copilot, Cursor, LLMs) is a plus.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6d639959-bd7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"EarnIn","sameAs":"https://www.earnin.com/","logo":"https://logos.yubhub.co/earnin.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/earnin/jobs/7542937","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C#","Java","Python","Go","Scala","DynamoDB","MySQL","ElasticSearch","continuous integration","delivery tools","functional and integration tests","REST APIs","distributed systems & services"],"x-skills-preferred":["Kubernetes","microservices","event-driven architecture","AI-assisted development tools"],"datePosted":"2026-04-18T15:57:58.311Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C#, Java, Python, Go, Scala, DynamoDB, MySQL, ElasticSearch, continuous integration, delivery tools, functional and integration tests, REST APIs, distributed systems & services, Kubernetes, microservices, event-driven architecture, AI-assisted development tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7dc0b69a-5b8"},"title":"Senior Engineer, Storage Control Plane","description":"<p>We&#39;re looking for a Senior Storage Engineer to play a key role in designing, building, and operating the control plane for our high-performance AI storage platform. 
You&#39;ll help evolve CoreWeave&#39;s storage systems by building reliable, scalable, and high-throughput solutions that power some of the largest and most innovative AI workloads in the world.</p>\n<p>This role involves close collaboration with teams across infrastructure, compute, and platform to ensure our storage services scale automatically and seamlessly while maximizing performance and reliability.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design and implement a highly scalable multi-tenant control plane that supports CoreWeave&#39;s growing AI storage and cloud infrastructure needs.</li>\n<li>Contribute to the development of exabyte-scale, S3-compatible object storage and distributed file systems, and integrate dedicated storage clusters into diverse customer environments.</li>\n<li>Work with technologies such as RDMA, GPU Direct Storage, RoCE, InfiniBand, SPDK, and distributed filesystems to optimize storage performance and efficiency.</li>\n<li>Participate in efforts to improve the reliability, durability, and observability of our storage stack.</li>\n<li>Collaborate with operations teams to monitor, analyze, and optimize storage systems using telemetry, metrics, and dashboards to improve performance, latency, and resilience.</li>\n<li>Work cross-functionally with platform, product, and infrastructure teams to deliver seamless storage capabilities across the stack.</li>\n<li>Share your knowledge and mentor other engineers on best practices in building distributed, high-performance systems.</li>\n</ul>\n<p>The ideal candidate will have:</p>\n<ul>\n<li>A Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field.</li>\n<li>6–10 years of experience working in storage systems engineering or infrastructure.</li>\n<li>Strong hands-on experience with object storage or distributed filesystems in production environments.</li>\n<li>Experience with one or more storage protocols (e.g.
S3, NFS) and file systems such as Ceph, DAOS, or similar.</li>\n<li>Proficiency in a systems programming language such as Go, C, or Rust.</li>\n<li>Familiarity with storage observability tools and telemetry pipelines (e.g., ClickHouse, Prometheus, Grafana).</li>\n<li>Solid understanding of cloud-native infrastructure, Kubernetes, and scalable system architecture.</li>\n<li>Strong debugging and problem-solving skills in distributed, high-performance environments.</li>\n<li>Clear communicator, able to work collaboratively across teams and share technical insights effectively.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7dc0b69a-5b8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4611874006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,000 to $204,000","x-skills-required":["object storage","distributed filesystems","RDMA","GPU Direct Storage","RoCE","InfiniBand","SPDK","cloud-native infrastructure","Kubernetes","scalable system architecture"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:57.450Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"object storage, distributed filesystems, RDMA, GPU Direct Storage, RoCE, InfiniBand, SPDK, cloud-native infrastructure, Kubernetes, scalable system 
architecture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139000,"maxValue":204000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9bb1344c-662"},"title":"Sr. Solutions Engineer, Retail - CPG","description":"<p>We are looking for a Senior Solutions Engineer to join our team. As a Senior Solutions Engineer, you will work with large enterprises in the Retail and CPG space to help them become more data-driven. You will define and direct the technical strategy for our largest and most important accounts, leading to more widespread use of our products and wider and deeper adoption of ML &amp; AI.</p>\n<p>You will work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives. You will also work with a team of engineers to build proofs of concept and demonstrate our products.</p>\n<p>The ideal candidate will have a strong background in value selling, technical account management, and technical leadership. 
They will also have a solid understanding of big data, data science, and cloud technologies.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Define and direct the technical strategy for our largest and most important accounts</li>\n<li>Work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives</li>\n<li>Collaborate with a team of engineers to build proofs of concept and demonstrate our products</li>\n<li>Provide technical guidance and support to customers</li>\n<li>Work with customers to identify and address technical issues</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience working with large enterprises in the Retail and CPG space</li>\n<li>3+ years of experience in a pre-sales capacity or supporting sales activity</li>\n<li>Strong background in value selling, technical account management, and technical leadership</li>\n<li>Solid understanding of big data, data science, and cloud technologies</li>\n<li>Experience with design and implementation of big data technologies such as Hadoop, NoSQL, MPP, OLTP, and OLAP</li>\n<li>Production programming experience in Python, R, Scala, or Java</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Databricks Certification</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9bb1344c-662","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7507778002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["big data","data science","cloud technologies","Hadoop","NoSQL","MPP","OLTP","OLAP","Python","R","Scala","Java"],"x-skills-preferred":["Databricks 
Certification"],"datePosted":"2026-04-18T15:57:56.592Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Illinois"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data, data science, cloud technologies, Hadoop, NoSQL, MPP, OLTP, OLAP, Python, R, Scala, Java, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b2f6f807-fc6"},"title":"Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>\n<p>We are looking for a software engineer to join our team as a founding member of our Belgrade site. As a software engineer, you will be involved in the entire development cycle and exemplify all core Databricks values.</p>\n<p>The responsibilities you will have:</p>\n<ul>\n<li>Drive requirements clarity and design decisions for ambiguous problems</li>\n<li>Produce technical design documents and project plans</li>\n<li>Develop new features</li>\n<li>Mentor more junior engineers</li>\n<li>Test and rollout to production, monitoring</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS in Computer Science or equivalent practical experience in databases or distributed systems</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Motivated by delivering customer value and impact</li>\n<li>3+ years of production level experience in either Java, Scala or C++</li>\n<li>Solid foundation in algorithms and data structures and their real-world use cases</li>\n<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop)</li>\n</ul>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the 
needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b2f6f807-fc6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8012691002","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:53.371Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Belgrade, Serbia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_10290548-1ea"},"title":"Solutions Architect - Public Sector (LEAPS)","description":"<p>As a Solutions Architect - Public Sector at Databricks, you will be part of the Field Engineering team responsible for leading the growth of the Databricks Unified Analytics Platform. 
The role involves working with customers, teammates, the product team, and post-sales teams to identify use cases for Databricks, develop architectures and solutions using our platform, and guide customers through implementation to realize value.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Partnering with the sales team to help customers understand how Databricks can help solve their business problems</li>\n<li>Providing technical leadership for customers to evaluate and adopt Databricks</li>\n<li>Consulting on big data architecture, implementing proofs of concept for strategic customer projects, data science and machine learning projects, and validating integrations with cloud services and other 3rd party applications</li>\n<li>Building and presenting reference architectures, how-tos, and demo applications for customers</li>\n<li>Becoming an expert in, and promoting, Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars</li>\n<li>Traveling to customers in your region</li>\n</ul>\n<p>We look for candidates with 5+ years of experience in a customer-facing pre-sales, technical architecture, or consulting role, with expertise in designing and architecting distributed data systems.
Experience with public cloud providers such as AWS, Azure, or GCP, data engineering technologies (e.g., Spark, Hadoop, Kafka), and data warehousing (e.g., SQL, OLTP/OLAP/DSS) is also required.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_10290548-1ea","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8320126002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["Apache Spark","MLflow","Delta Lake","Python","Scala","Java","SQL","R","AWS","Azure","GCP","Data Engineering","Data Warehousing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:53.145Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Maryland; Virginia; Washington, D.C."}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, MLflow, Delta Lake, Python, Scala, Java, SQL, R, AWS, Azure, GCP, Data Engineering, Data Warehousing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_82e9a289-022"},"title":"Senior Software Engineer  - Application Traffic team","description":"<p>As a Senior Software Engineer on the Application Traffic team, you will design and build the systems that power Databricks&#39; service-to-service communication across thousands of clusters in a multi-cloud environment. 
You will also help create abstractions that hide networking complexity from product teams, making connectivity, discovery, and reliability seamless by default.</p>\n<p>You&#39;ll work across three key areas that define Databricks&#39; networking stack:</p>\n<p>Ingress Control Plane: Build the control plane for Databricks&#39; global ingress layer. Enable programming of API gateways with static and dynamic endpoints, simplify service onboarding, and make it easy to expose APIs securely across clouds.</p>\n<p>Service-to-Service Communication: Design scalable mechanisms for service discovery and load balancing across thousands of clusters. Provide networking abstractions so product teams don&#39;t need to worry about underlying connectivity details.</p>\n<p>Overload Protection: Build intelligent rate limiting and admission control systems to protect critical services under high load. Ensure reliability and predictable performance for both customer-facing and internal workloads.</p>\n<p>We&#39;re looking for someone with a strong proficiency in one or more languages such as Java, Scala, Go, or C++, and experience with service-oriented architectures and large scale distributed systems. Familiarity with cloud platforms (AWS, Azure, GCP) and container/orchestration technologies (Kubernetes, Docker) is also required. 
A track record of shipping infrastructure that supports mission-critical workloads at scale is essential.</p>\n<p>The pay range for this role is $166,000-$225,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_82e9a289-022","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8183195002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["Java","Scala","Go","C++","service-oriented architectures","large scale distributed systems","cloud platforms","container/orchestration technologies"],"x-skills-preferred":["service discovery","DNS","load balancing","Envoy"],"datePosted":"2026-04-18T15:57:51.589Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, Go, C++, service-oriented architectures, large scale distributed systems, cloud platforms, container/orchestration technologies, service discovery, DNS, load balancing, Envoy","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b1be4c11-417"},"title":"Senior Research Scientist, Reward Models","description":"<p>As a Senior Research Scientist on our Reward Models team, you&#39;ll lead research efforts to improve how we specify and learn human preferences at scale. 
Your work will directly shape how our models understand and optimize for what humans actually want, enabling Claude to be more useful, more reliable, and better aligned with human values.</p>\n<p>This role focuses on pushing the frontier of reward modeling for large language models. You&#39;ll develop novel architectures and training methodologies for RLHF, research new approaches to LLM-based evaluation and grading (including rubric-based methods), and investigate techniques to identify and mitigate reward hacking. You&#39;ll collaborate closely with teams across Anthropic, including Finetuning, Alignment Science, and our broader research organization, to ensure your work translates into concrete improvements in both model capabilities and safety.</p>\n<p>We&#39;re looking for someone who can drive ambitious research agendas while also shipping practical improvements to production systems. You&#39;ll have the opportunity to work on some of the most important open problems in AI alignment, with access to frontier models and significant computational resources. 
Your work will directly advance the science of how we train AI systems to be both highly capable and safe.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead research on novel reward model architectures and training approaches for RLHF</li>\n<li>Develop and evaluate LLM-based grading and evaluation methods, including rubric-driven approaches that improve consistency and interpretability</li>\n<li>Research techniques to detect, characterize, and mitigate reward hacking and specification gaming</li>\n<li>Design experiments to understand reward model generalization, robustness, and failure modes</li>\n<li>Collaborate with the Finetuning team to translate research insights into improvements for production training pipelines</li>\n<li>Contribute to research publications, blog posts, and internal documentation</li>\n<li>Mentor other researchers and help build institutional knowledge around reward modeling</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have a track record of research contributions in reward modeling, RLHF, or closely related areas of machine learning</li>\n<li>Have experience training and evaluating reward models for large language models</li>\n<li>Are comfortable designing and running large-scale experiments with significant computational resources</li>\n<li>Can work effectively across research and engineering, iterating quickly while maintaining scientific rigor</li>\n<li>Enjoy collaborative research and can communicate complex ideas clearly to diverse audiences</li>\n<li>Care deeply about building AI systems that are both highly capable and safe</li>\n</ul>\n<p>Strong candidates may also:</p>\n<ul>\n<li>Have published research on reward modeling, preference learning, or RLHF</li>\n<li>Have experience with LLM-as-judge approaches, including calibration and reliability challenges</li>\n<li>Have worked on reward hacking, specification gaming, or related robustness problems</li>\n<li>Have experience with constitutional AI, debate, or other scalable 
oversight approaches</li>\n<li>Have contributed to production ML systems at scale</li>\n<li>Have familiarity with interpretability techniques as applied to understanding reward model behavior</li>\n</ul>\n<p>The annual compensation range for this role is $350,000-$500,000 USD.</p>","url":"https://yubhub.co/jobs/job_b1be4c11-417","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5024835008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000-$500,000 USD","x-skills-required":["reward modeling","RLHF","LLM-based evaluation and grading","rubric-driven approaches","reward hacking","specification gaming","large-scale experiments","computational resources","research and engineering","collaborative research","complex ideas communication","AI systems development"],"x-skills-preferred":["published research","LLM-as-judge approaches","calibration and reliability challenges","constitutional AI","debate","scalable oversight approaches","production ML systems","interpretability techniques"],"datePosted":"2026-04-18T15:57:50.755Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel Required) | San Francisco, CA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"reward modeling, RLHF, LLM-based evaluation and grading, rubric-driven approaches, reward hacking, specification gaming, large-scale experiments, computational resources, research and engineering, collaborative research, complex ideas communication, AI systems development, published research, LLM-as-judge 
approaches, calibration and reliability challenges, constitutional AI, debate, scalable oversight approaches, production ML systems, interpretability techniques","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":500000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a9a579af-fdc"},"title":"Sr. Solutions Architect","description":"<p>At Databricks, we are seeking a Senior Solutions Architect to join our Field Engineering team. As a key member of our team, you will work closely with customers to understand their complex data challenges and develop customized solutions using our Data Intelligence Platform.</p>\n<p>Our team is responsible for demonstrating the value of our platform to customers and providing them with the necessary expertise to succeed. We are looking for someone who is passionate about data and has a strong technical background in software engineering.</p>\n<p>In this role, you will have the opportunity to work with a variety of customers across different industries and geographies. You will also have the chance to contribute to the development of our technical community engagement initiatives, including customer-facing collateral and workshops.</p>\n<p>We offer a competitive salary and benefits package, as well as opportunities for professional growth and development. 
If you are a motivated and experienced software engineer looking for a new challenge, we encourage you to apply.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Develop customer engagement strategies in partnership with Account Executive(s) in your designated territory.</li>\n<li>Coach junior Solutions Architects and teams on use case prioritization and building technical champions.</li>\n<li>Influence stakeholders at all levels through complex engagements with the wider cloud ecosystem and 3rd party applications, ensuring they are excited by the Databricks vision and solution strategy.</li>\n<li>Be a &#39;champion&#39; for both customers and colleagues, operating as an expert solution architect and trusted advisor for significant data analytics architecture, design, and adoption of the Databricks Data Intelligence Platform.</li>\n<li>Contribute to Databricks&#39; technical community engagement by developing customer-facing collateral and leading workshops, seminars, and meet-ups.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Know how to engage in complex customer interactions and sales lifecycle in a technical pre-sales capacity.</li>\n<li>Ability to influence decision-makers and C-level executives by developing relationships and orchestrating teams to achieve long-term customer success.</li>\n<li>Prior experience with coding in a core programming language (e.g., Python, Java, Scala) and willingness to learn a base level of Spark.</li>\n<li>Hands-on expertise with complex proofs-of-concept and public cloud platform(s).</li>\n<li>Know how to provide technical solutions for specialized customer needs and navigate a competitive landscape.</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Comprehensive benefits and perks package</li>\n<li>Opportunities for professional growth and development</li>\n<li>Competitive salary</li>\n<li>Flexible working hours</li>\n<li>Collaborative and dynamic work environment</li>\n</ul>\n<p>We are an equal opportunities employer and welcome applications from 
all qualified candidates.</p>","url":"https://yubhub.co/jobs/job_a9a579af-fdc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8194862002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","Spark","Cloud computing","Data analytics","Software engineering"],"x-skills-preferred":["Machine learning","Data science","Cloud architecture","DevOps","Agile methodologies"],"datePosted":"2026-04-18T15:57:47.794Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sydney, Australia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, Spark, Cloud computing, Data analytics, Software engineering, Machine learning, Data science, Cloud architecture, DevOps, Agile methodologies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5196c4ac-d97"},"title":"Senior Software Engineer - Infrastructure and Tools","description":"<p>We are seeking a Senior Software Engineer to join our Infrastructure teams. 
As a key member of our team, you will build scalable systems to power the Databricks platform, making it the de-facto platform for running Big Data and AI workloads.</p>\n<p>Your responsibilities will include building and extending components of the core Databricks infrastructure, architecting multi-cloud systems and abstractions to allow the Databricks product to run on top of existing Cloud providers, improving software development workflows for engineering and operational efficiency, using our own data and AI platform to analyze build and test logs and metrics to identify areas for improvement, developing automated build, test, and release infrastructures, and setting and upholding the standard for engineering processes to support high-quality engineering.</p>\n<p>To succeed in this role, you will need a BS (or higher) in Computer Science, or a related field, and 5+ years of experience writing production code in one of Java, Scala, Go, C++, or Python. You should also have passion for building highly scalable and reliable infrastructure, experience architecting, developing, and deploying large-scale distributed systems at scale, and experience with cloud APIs and cloud technologies such as AWS, Azure, GCP, Docker, Kubernetes, or Terraform.</p>\n<p>In addition to a competitive salary, we offer comprehensive health coverage, 401(k) plan, equity awards, flexible time off, paid parental leave, family planning, gym reimbursement, annual personal development fund, work headphones reimbursement, employee assistance program, and business travel accident insurance.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5196c4ac-d97","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6318503002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["Java","Scala","Go","C++","Python","Cloud APIs","Cloud technologies","AWS","Azure","GCP","Docker","Kubernetes","Terraform"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:44.136Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, Go, C++, Python, Cloud APIs, Cloud technologies, AWS, Azure, GCP, Docker, Kubernetes, Terraform","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ef18c75a-f72"},"title":"Solutions Architect, Applied AI (Commercial)","description":"<p>As a Solutions Architect, Applied AI (Commercial), you will be a Pre-Sales architect focused on becoming a trusted technical advisor, helping customers understand the value of Claude and paint the vision of how they can successfully integrate and deploy Claude into their technology stack.</p>\n<p>You&#39;ll combine your technical depth with customer-facing skills to architect innovative LLM solutions that address complex business challenges while maintaining our high standards for safety and reliability.</p>\n<p>As a Commercial Solutions Architect, you&#39;ll go deep with priority accounts as a hands-on builder, while creating reusable 
blueprints, demos, and enablement that extend Claude&#39;s reach across the broader Commercial book of business.</p>\n<p>Working closely with our Sales, Product, and Engineering teams, you&#39;ll guide customers from initial technical discovery through successful deployment. You&#39;ll leverage your expertise to help customers understand Claude&#39;s capabilities, develop evals, and design scalable architectures that maximize the value of our AI systems.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with account executives to deeply understand customer requirements and translate them into technical solutions, ensuring alignment between business objectives and technical implementation</li>\n<li>Serve as the primary technical advisor to customers throughout their Claude adoption journey, from discovery and initial evaluation through deployment, coordinating internally across multiple teams and stakeholders to drive customer success</li>\n<li>Support customers building with the Claude API, Claude Code, and Claude for Enterprise</li>\n<li>Ship working code: build prototypes and proofs-of-concept hands-on, develop eval frameworks, and write near-production examples that customers can extend</li>\n<li>Build reusable blueprints, demos, and enablement assets that scale across customers</li>\n<li>Guide technical architecture decisions and help customers integrate Claude effectively into their existing technology stack</li>\n<li>Help customers develop evaluation frameworks to measure Claude&#39;s performance for their specific use cases</li>\n<li>Identify common integration patterns and contribute insights back to our Product and Engineering teams</li>\n<li>Travel occasionally to customer sites for workshops, technical deep dives, and relationship building</li>\n<li>Maintain strong knowledge of the latest developments in LLM capabilities and implementation patterns</li>\n</ul>\n<p>You may be a good fit if you have:</p>\n<ul>\n<li>3+ years of highly technical experience as a software engineer (or equivalent) with some customer-facing exposure, OR 3+ years as a Solutions Architect, Sales Engineer, or Technical Account Manager with strong hands-on building experience</li>\n<li>A builder identity. You&#39;ve shipped real software, you have technical taste, and you care about the craft of what you build</li>\n<li>A systems mindset. When you see a problem, your instinct is to ask &quot;how do I make this reusable?&quot; You&#39;d rather build one thing that serves ten customers than ten things that serve one each</li>\n<li>Strong coding ability. You ship prototypes regularly and can work in a real codebase, not just notebooks. 
Comfort with Python is expected</li>\n<li>Strong ability to build trust with technical stakeholders and adjust your communication for varied audiences</li>\n<li>Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders</li>\n<li>Experience designing scalable cloud architectures and integrating with enterprise systems</li>\n<li>Familiarity with common LLM frameworks and tools, or a background in machine learning or data science</li>\n<li>Comfort operating in early-stage, ambiguous environments where the playbook doesn&#39;t exist yet, and a track record of building structure as you go</li>\n<li>Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</li>\n<li>A love of teaching, mentoring, and helping others succeed</li>\n<li>Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems</li>\n</ul>\n<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $240,000-$315,000 USD</p>","url":"https://yubhub.co/jobs/job_ef18c75a-f72","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5192805008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$240,000-$315,000 USD","x-skills-required":["Python","LLM frameworks","Machine learning","Data 
science","Cloud architectures","Enterprise systems","Technical communication","Customer-facing skills","Technical depth"],"x-skills-preferred":["Sales engineering","Technical account management","Scalable cloud architectures","Integration with enterprise systems","Common LLM frameworks and tools"],"datePosted":"2026-04-18T15:57:36.898Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, LLM frameworks, Machine learning, Data science, Cloud architectures, Enterprise systems, Technical communication, Customer-facing skills, Technical depth, Sales engineering, Technical account management, Scalable cloud architectures, Integration with enterprise systems, Common LLM frameworks and tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":240000,"maxValue":315000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ba129b2-e3a"},"title":"Solutions Architect (Hong Kong)","description":"<p>We are seeking a Solutions Architect to join our Field Engineering team in Singapore. As a Solutions Architect, you will be responsible for demonstrating how our Data Intelligence Platform can help customers solve their complex data challenges. 
You will work with a collaborative, customer-focused team who values innovation and creativity, using your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Form successful relationships with clients in Hong Kong to provide technical and business value in collaboration with an Account Executive and a Senior Solutions Architect.</li>\n<li>Gain excitement from clients about Databricks through hands-on evaluation and Apache Spark programming, integrating with the wider cloud ecosystem and 3rd party applications.</li>\n<li>Contribute to building the Databricks technical community through engagement at workshops, seminars, and meet-ups.</li>\n<li>Become a Big Data Analytics advisor on aspects of architecture and design.</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications.</li>\n<li>Develop both technically and in the pre-sales aspect with the goal of becoming an independently operating Solutions Architect.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Familiarity working with clients, creating a narrative, answering customer questions, aligning the agenda with important interests, and achieving tangible outcomes.</li>\n<li>Ability to independently deliver a technical proposition, identify customers&#39; pain-points, and explain important areas for business value to develop a trusted advisor skillset.</li>\n<li>Code in a core programming language such as Python, Java, or Scala.</li>\n<li>Knowledgeable in a core Big Data Analytics domain with some exposure to advanced proofs-of-concept and an understanding of a major public cloud platform.</li>\n<li>Experience diving deeper into solution architecture and design.</li>\n<li>Proficiency in Cantonese is required, as this role serves clients based in Hong Kong and involves direct customer communications in Cantonese.</li>\n</ul>","url":"https://yubhub.co/jobs/job_1ba129b2-e3a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437010002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Python","Java","Scala","Big Data Analytics","Cloud Computing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:32.290Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Python, Java, Scala, Big Data Analytics, Cloud Computing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_22bcbb50-ef4"},"title":"Member of Technical Staff - Data Platform","description":"<p><strong>About the Role</strong></p>\n<p>The Data Platform team at xAI builds and operates the infrastructure responsible for all large-scale data transport and processing across the company.</p>\n<p>As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X&#39;s data movement and compute.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and implement high-throughput, low-latency data ingestion and transport systems.</li>\n<li>Scale and optimise multi-tenant Kafka infrastructure supporting real-time workloads.</li>\n<li>Extend and tune Spark, Flink, and Trino for demanding production pipelines.</li>\n<li>Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.</li>\n<li>Debug and optimise 
distributed systems, with a focus on reliability and performance under load.</li>\n<li>Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Proven expertise in distributed systems, stream processing, or large-scale data platforms.</li>\n<li>Proficiency in Rust, Go, Scala or similar systems languages.</li>\n<li>Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production.</li>\n<li>Strong debugging, profiling, and performance optimisation skills.</li>\n<li>Track record of shipping and maintaining critical infrastructure.</li>\n<li>Comfortable working in fast-moving, high-stakes environments with minimal guardrails.</li>\n</ul>\n<p><strong>Compensation and Benefits</strong></p>\n<p>$180,000 - $440,000 USD</p>\n<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>","url":"https://yubhub.co/jobs/job_22bcbb50-ef4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.x.com/","logo":"https://logos.yubhub.co/x.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4803862007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Rust","Go","Scala","Kafka","Flink","Spark","Trino","Hadoop","distributed systems","stream processing","large-scale data platforms"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:30.705Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, 
CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, Go, Scala, Kafka, Flink, Spark, Trino, Hadoop, distributed systems, stream processing, large-scale data platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_646a6426-386"},"title":"Member of Technical Staff - X Money","description":"<p>We are seeking a talented Software Engineer to join our X Money team, focused on building a revolutionary global payment network that will serve over 600 million users and rival the world&#39;s largest financial institutions.</p>\n<p>In this role, you will specialise in backend development, designing and optimising robust microservices to ensure scalability, security, and reliability. You will support full-stack efforts, collaborate with cross-functional teams on payments, fraud detection, and compliance initiatives, and contribute to the creation of a high-scale financial products platform.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Develop and optimise microservices for high-concurrency transactions using Go, Postgres, and Kafka.</li>\n<li>Collaborate on system architecture, testing, and monitoring to ensure uptime and performance.</li>\n<li>Build internal tools using frontend technologies as needed to support operational efficiency.</li>\n<li>Support the Technical Lead in risk mitigation and align with engineering, product, and compliance teams to drive project success.</li>\n<li>Contribute to the development of secure, scalable systems for handling financial data and transactions.</li>\n<li>Iterate rapidly on feedback to deliver high-quality solutions in a dynamic environment.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>5+ years of software engineering experience, with a strong focus on 
backend development.</li>\n<li>Proficiency in Go or similar languages and experience with databases (e.g., Postgres) and streaming systems (e.g., Kafka).</li>\n<li>Familiarity with building distributed systems for high-scale, low-latency environments.</li>\n<li>Knowledge of handling secure financial data.</li>\n<li>Ability to contribute to frontend development for internal tools when required.</li>\n<li>Strong communication and problem-solving skills, with a collaborative mindset.</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Experience in fintech, payments, or regulatory frameworks (e.g., PCI-DSS, AML/KYC).</li>\n<li>Prior work in a fast-paced, startup-like environment on greenfield projects.</li>\n<li>Comfort navigating ambiguous requirements and iterating based on feedback.</li>\n<li>Passion for leveraging AI to transform financial systems.</li>\n</ul>","url":"https://yubhub.co/jobs/job_646a6426-386","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5007310007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go","Postgres","Kafka","backend development","microservices","scalability","security","reliability","distributed systems","financial data","frontend development"],"x-skills-preferred":["fintech","payments","regulatory frameworks","PCI-DSS","AML/KYC","fast-paced environment","greenfield projects","AI transformation"],"datePosted":"2026-04-18T15:57:30.352Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, JP"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Go, Postgres, Kafka, 
backend development, microservices, scalability, security, reliability, distributed systems, financial data, frontend development, fintech, payments, regulatory frameworks, PCI-DSS, AML/KYC, fast-paced environment, greenfield projects, AI transformation"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8871a994-591"},"title":"Machine Learning Engineer, Core Engineering","description":"<p>We&#39;re seeking a talented Machine Learning Engineer to join our Core Engineering team. As a Machine Learning Engineer at Pinterest, you will build cutting-edge technology using the latest advances in deep learning and machine learning to personalize Pinterest. You will partner closely with teams across Pinterest to experiment and improve ML models for various product surfaces, while gaining knowledge of how ML works in different areas.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Build cutting-edge technology using the latest advances in deep learning and machine learning to personalize Pinterest</li>\n<li>Partner closely with teams across Pinterest to experiment and improve ML models for various product surfaces (Homefeed, Ads, Growth, Shopping, and Search), while gaining knowledge of how ML works in different areas</li>\n<li>Use data-driven methods and leverage the unique properties of our data to improve candidate retrieval</li>\n<li>Work in a high-impact environment with quick experimentation and product launches</li>\n<li>Keep up with industry trends in recommendation systems</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>2+ years of industry experience applying machine learning methods (e.g., user modeling, personalization, recommender systems, search, ranking, natural language processing, reinforcement learning, and graph representation learning)</li>\n<li>End-to-end hands-on experience with building data processing pipelines, large-scale machine learning systems, and big data technologies (e.g., 
Hadoop/Spark)</li>\n<li>Degree in computer science, machine learning, statistics, or related field</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>M.S. or PhD in Machine Learning or related areas</li>\n<li>Publications at top ML conferences</li>\n<li>Experience using Cursor, Copilot, Codex, or similar AI coding assistants for development, debugging, testing, and refactoring</li>\n<li>Familiarity with LLM-powered productivity tools for documentation search, experiment analysis, SQL/data exploration, and engineering workflow acceleration</li>\n<li>Expertise in scalable real-time systems that process stream data</li>\n<li>Passion for applied ML and the Pinterest product</li>\n</ul>\n<p>Relocation Statement:</p>\n<p>This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.</p>","url":"https://yubhub.co/jobs/job_8871a994-591","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pinterest","sameAs":"https://www.pinterest.com/","logo":"https://logos.yubhub.co/pinterest.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/6121450","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$138,905-$285,982 USD","x-skills-required":["machine learning","deep learning","data processing pipelines","large-scale machine learning systems","big data technologies","Hadoop","Spark","natural language processing","reinforcement learning","graph representation learning"],"x-skills-preferred":["Cursor","Copilot","Codex","LLM-powered productivity tools","scalable real-time systems","stream data"],"datePosted":"2026-04-18T15:57:30.186Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; Remote, 
US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, deep learning, data processing pipelines, large-scale machine learning systems, big data technologies, Hadoop, Spark, natural language processing, reinforcement learning, graph representation learning, Cursor, Copilot, Codex, LLM-powered productivity tools, scalable real-time systems, stream data","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":138905,"maxValue":285982,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_588dfb0e-611"},"title":"Solutions Architect - Kubernetes","description":"<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>\n<p>Your responsibilities will include serving as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings.</p>\n<p>You will collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements.</p>\n<p>You will lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>\n<p>You will drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</p>\n<p>You will act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>\n<p>You will offer 
valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</p>\n<p>You will conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</p>\n<p>You will stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders.</p>\n<p>You will lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>\n<p>You will represent CoreWeave at conferences and industry events, with occasional travel as required.</p>\n<p>To be successful in this role, you will need to have a B.S. in Computer Science or a related technical discipline, or equivalent experience.</p>\n<p>You will also need to have 7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with an expertise focused on scalable Kubernetes solutions.</p>\n<p>You will need to be fluent in cloud computing concepts, architecture, and technologies with hands-on experience in designing and implementing cloud solutions.</p>\n<p>You will need to have a proven track record with building customer relationships, communicating clearly and the ability to break down complex technical concepts to both technical and non-technical audiences.</p>\n<p>You will need to be familiar with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as Infiniband and NVIDIA Collective Communications Library (NCCL).</p>\n<p>You will need to have experience with running large-scale Artificial Intelligence/Machine Learning (AI/ML) training and inference workloads on technologies 
such as Slurm and Kubernetes.</p>\n<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience with building solutions across multi-cloud environments, and client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>","url":"https://yubhub.co/jobs/job_588dfb0e-611","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4557835006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $220,000","x-skills-required":["Kubernetes","Cloud Computing","High-Performance Compute (HPC)","Distributed Systems","Cloud Infrastructure","Scalable Solutions","NVIDIA GPUs","Infiniband","NVIDIA Collective Communications Library (NCCL)","Slurm","Kubernetes Clusters"],"x-skills-preferred":["Code Contributions to Open-Source Inference Frameworks","Scripting and Automation Related to Kubernetes Clusters and Workloads","Building Solutions Across Multi-Cloud Environments","Client or Customer-Facing Publications/Talks on Latency, Optimization, or Advanced Model-Server Architectures"],"datePosted":"2026-04-18T15:57:29.779Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Cloud Computing, High-Performance Compute (HPC), Distributed Systems, Cloud Infrastructure, Scalable Solutions, NVIDIA GPUs, Infiniband, NVIDIA Collective 
Communications Library (NCCL), Slurm, Kubernetes Clusters, Code Contributions to Open-Source Inference Frameworks, Scripting and Automation Related to Kubernetes Clusters and Workloads, Building Solutions Across Multi-Cloud Environments, Client or Customer-Facing Publications/Talks on Latency, Optimization, or Advanced Model-Server Architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_33f60e2b-f34"},"title":"Sr. Solutions Architect - Greenfield (New Logo) France","description":"<p>Job Title: Sr. Solutions Architect - Greenfield (New Logo) France</p>\n<p>We are seeking a Senior Solutions Architect to join our team in Paris. As a Senior Solutions Architect, you will be responsible for providing technical and business value to Databricks customers in collaboration with Account Executives.</p>\n<p>The location for the role should be in the Paris region (i.e. 
within a commutable distance for a hybrid schedule).</p>\n<p>At Databricks, our core values are at the heart of everything we do; creating a culture of proactiveness and a customer-centric mindset guides us to create a unified platform that makes data science and analytics accessible to everyone.</p>\n<p>You will be an essential part of this mission, using your technical expertise to demonstrate how our Data &amp; Intelligence Platform can help customers solve their complex data challenges.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Form successful relationships with strategic enterprise clients within the Greenfield territory,</li>\n<li>Operate as an expert in big data analytics to excite customers about Databricks,</li>\n<li>Develop into a ‘champion’ and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform,</li>\n<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications,</li>\n<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Engage customers in technical sales, challenge their questions, guide clear outcomes, and communicate technical and value propositions,</li>\n<li>Develop customer relationships and build internal partnerships with account executives and teams,</li>\n<li>Experience with managing strategic enterprise accounts,</li>\n<li>Prior experience with coding in a core programming language (i.e., Python, Java, Scala) and willingness to learn a base level of Spark,</li>\n<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s),</li>\n<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences requiring an ability to context switch in levels of 
technical depth.</li>\n</ul>\n<p>Mandatory requirements:</p>\n<ul>\n<li>The location for the role should be in the Paris region (i.e. within a commutable distance for a hybrid schedule),</li>\n<li>Flexibility to travel (up to 30% as required for customer meetings, events and trainings),</li>\n<li>Business proficiency in both French and English required.</li>\n</ul>","url":"https://yubhub.co/jobs/job_33f60e2b-f34","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8449356002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["big data analytics","cloud platform","complex proofs-of-concept","core programming language","solution architecture"],"x-skills-preferred":["Spark","Python","Java","Scala"],"datePosted":"2026-04-18T15:57:27.701Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris, France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data analytics, cloud platform, complex proofs-of-concept, core programming language, solution architecture, Spark, Python, Java, Scala"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d929542f-ab4"},"title":"Senior Software Engineer","description":"<p>We&#39;re seeking experienced senior engineers to join our backend teams. 
As a backend engineer, you will work cross-functionally with various teams and contribute to the design and development of our backend services.</p>\n<p>This position will be a hybrid role based in our Bengaluru office, with 2 days on-site as part of our expanding site. EarnIn provides excellent benefits for our employees, including healthcare, internet/cell phone reimbursement, a learning and development stipend, and potential opportunities to travel to our Palo Alto HQ.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design &amp; implement features robust enough for our large scale.</li>\n<li>Drive the implementation of new features, break complex problems down to their bare essentials, translate that complexity into elegant design, and create high-quality, maintainable code.</li>\n<li>Create and maintain test automation to enable continuous integration and development velocity.</li>\n<li>Design &amp; deliver thoughtfully crafted REST APIs to drive the interactions between our client applications and backend services.</li>\n<li>Collaborate and mentor other engineers while providing thoughtful guidance using code, design, and architecture reviews.</li>\n<li>Work cross-functionally with other teams (data science, design, product, marketing, analytics).</li>\n<li>Leverage a broad skill set and help us implement and learn new technologies quickly.</li>\n<li>Provide and receive design and implementation evaluations and improve with each iteration.</li>\n<li>Debug production issues across our services infrastructure and multiple levels of our stack.</li>\n<li>Think about distributed systems &amp; services and care passionately about producing high-quality code.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>4+ years of development experience in Software Engineering</li>\n<li>Bachelor&#39;s, Master’s, or PhD degree in computer science, computer engineering, or a related technical discipline, or equivalent industry experience.</li>\n<li>Proficient in at least one modern 
programming language such as C#, Java, Python, Go, and Scala.</li>\n<li>Hands-on experience working with various databases (DynamoDB, MySQL, ElasticSearch) and data pipeline technologies.</li>\n<li>Experience with continuous integration and delivery tools.</li>\n<li>Experience using AI-assisted development tools (e.g., Copilot, Cursor, LLMs)</li>\n<li>Experienced in developing and executing functional and integration tests.</li>\n<li>Excellent written and verbal communication skills.</li>\n<li>Ability to thrive in a fast-paced, dynamic environment and have a bias towards action and results.</li>\n<li>Experience with Kubernetes, microservices, and event-driven architecture is a strong plus.</li>\n<li>Experience in payments or fintech is a plus.</li>\n<li>Experience with payment processors or internal financial systems is a plus.</li>\n</ul>","url":"https://yubhub.co/jobs/job_d929542f-ab4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"EarnIn","sameAs":"https://www.earnin.com/","logo":"https://logos.yubhub.co/earnin.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/earnin/jobs/7392234","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C#","Java","Python","Go","Scala","DynamoDB","MySQL","ElasticSearch","continuous integration","delivery tools","AI-assisted development tools","functional and integration tests","Kubernetes","microservices","event-driven architecture"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:25.556Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"C#, Java, Python, Go, Scala, DynamoDB, MySQL, ElasticSearch, continuous integration, delivery tools, 
AI-assisted development tools, functional and integration tests, Kubernetes, microservices, event-driven architecture"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cb18189c-d78"},"title":"Solutions Architect (Pre-sales) - Kansai Region","description":"<p>As a Pre-sales Solutions Architect (Analytics, AI, Big Data, Public Cloud) – Kansai Region, your mission will be to drive successful technical evaluations and solution designs for some of our focus customers in the Kansai region (Osaka/Kyoto) for Databricks Japan.</p>\n<p>You are passionate about data and AI, love getting hands-on with technology, and enjoy communicating its value to both technical and non-technical stakeholders. Partnering closely with Account Executives, you will lead the technical discovery, architecture design, and proof-of-concept phases, and act as a trusted advisor to our customers on their data and AI strategy.</p>\n<p>You will help customers realize tangible, data-driven outcomes on the Databricks Lakehouse Platform by guiding data and AI teams to design, build, and operationalize solutions within their enterprise ecosystem.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your prospects through evaluating and adopting Databricks</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars, and meet-ups</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Pre-sales or post-sales experience working with external clients across a variety of industry markets</li>\n<li>Understanding of customer-facing pre-sales or consulting role with a core strength in either Data Engineering or Data Science advantageous</li>\n<li>Experience demonstrating 
technical concepts, including presenting and whiteboarding</li>\n<li>Experience designing and implementing architectures within public clouds (AWS, Azure, or GCP)</li>\n<li>Experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Fluent coding experience in Python or Scala implementing Apache Spark; Java and R are also desirable</li>\n<li>Experience working with Enterprise Accounts</li>\n<li>Written and verbal fluency in Japanese</li>\n</ul>\n<p>Benefits:</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>","url":"https://yubhub.co/jobs/job_cb18189c-d78","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437028002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","Scala","Java","R","Public Cloud","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:24.678Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R, Public Cloud, AWS, Azure, 
GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_65befd80-0e2"},"title":"Staff Software Engineer","description":"<p>We&#39;re seeking an experienced Staff-level backend software engineer to join our Live Pay team. You&#39;ll work cross-functionally with various teams and contribute to the design and development of key platform services. This person must be strong in JVM languages and event-driven architecture on AWS.</p>\n<p>The Canada base salary range for this full-time position is $252,000-$308,000, plus equity and benefits. Our salary ranges are determined by role, level, and location. This role will be hybrid from our Vancouver, CAN office, with 2 days a week in the office required.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Drive the design and implementation of new features. Break down complex problems into their bare essentials, translate this complexity into elegant design, and create high-quality, clean code.</li>\n</ul>\n<ul>\n<li>Make a meaningful impact on the lives of our community members.</li>\n</ul>\n<ul>\n<li>Design, develop, and deliver large-scale systems.</li>\n</ul>\n<ul>\n<li>Collaborate and mentor other engineers while providing thoughtful guidance using code, design, and architecture reviews.</li>\n</ul>\n<ul>\n<li>Contribute to defining technical direction, planning the roadmap, escalating issues, and synthesizing feedback to ensure team success.</li>\n</ul>\n<ul>\n<li>Estimate and manage team project timelines and risks.</li>\n</ul>\n<ul>\n<li>Care passionately about producing high-quality, efficient designs and code.</li>\n</ul>\n<ul>\n<li>Constantly learning about new technologies and industry standards in software engineering.</li>\n</ul>\n<ul>\n<li>Work cross-functionally with other teams, including: Analytics, design, product, marketing, and data science.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>7+ years of development experience in backend software 
development</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s, Master’s, or PhD in computer science, computer engineering, or a related technical discipline, or equivalent industry experience.</li>\n</ul>\n<ul>\n<li>Proficiency in at least one modern programming language, such as Java, Kotlin, Scala, or C#, and experience with at least one major framework such as Spring, Spring Boot, or ASP.NET Core.</li>\n</ul>\n<ul>\n<li>Hands-on experience working in cloud environments: AWS, GCP, or Azure</li>\n</ul>\n<ul>\n<li>Proficiency in event-driven systems such as Kafka, SQS, SNS, or Kinesis, and experience designing and operating scalable distributed systems.</li>\n</ul>\n<ul>\n<li>Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations</li>\n</ul>\n<ul>\n<li>Hands-on experience working with various databases (DynamoDB, MySQL, ElasticSearch)</li>\n</ul>\n<ul>\n<li>Experience using AI-assisted development tools (e.g., Copilot, Cursor, LLMs) to improve engineering productivity</li>\n</ul>\n<ul>\n<li>Experience with continuous integration and delivery tools, and experience in developing and executing functional and integration tests.</li>\n</ul>\n<ul>\n<li>Familiarity with a clean architecture approach and software craftsmanship</li>\n</ul>\n<ul>\n<li>Experience with Kubernetes and microservice architecture is a strong plus.</li>\n</ul>\n<ul>\n<li>Excellent written and verbal communication skills.</li>\n</ul>","url":"https://yubhub.co/jobs/job_65befd80-0e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"EarnIn","sameAs":"https://www.earnin.com/","logo":"https://logos.yubhub.co/earnin.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/earnin/jobs/7680387","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$252,000-$308,000","x-skills-required":["Java","Kotlin","Scala","C#","Spring","Spring Boot","ASP.NET Core","AWS","GCP","Azure","Kafka","SQS","SNS","Kinesis","DynamoDB","MySQL","ElasticSearch","AI-assisted development tools","Continuous integration and delivery tools","Clean architecture approach","Software craftsmanship","Kubernetes","Microservice architecture"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:22.668Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Kotlin, Scala, C#, Spring, Spring Boot, ASP.NET Core, AWS, GCP, Azure, Kafka, SQS, SNS, Kinesis, DynamoDB, MySQL, ElasticSearch, AI-assisted development tools, Continuous integration and delivery tools, Clean architecture approach, Software craftsmanship, Kubernetes, Microservice architecture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":252000,"maxValue":308000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2895081b-eab"},"title":"Sr. Specialist Solutions Architect","description":"<p>As a Sr. Specialist Solutions Architect, you will guide customers in building big data solutions on Databricks that span a large variety of use cases. 
You will be in a customer-facing role that requires hands-on production experience with Apache Spark and expertise in other data technologies, working with and supporting Solution Architects.</p>\n<p>Your responsibilities will include providing technical leadership to guide strategic customers to successful implementations on big data projects, architecting production-level data pipelines, becoming a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows, assisting Solution Architects with more advanced aspects of the technical sale, and contributing to the Databricks Community.</p>\n<p>To succeed in this role, you will need to have a strong background in software engineering and data engineering, with expertise in at least one of the following areas: software engineering/data engineering, data applications engineering, or deep specialty expertise in areas such as scaling big data workloads, migrating Hadoop workloads to the public cloud, or experience with large-scale data ingestion pipelines and data migrations.</p>\n<p>You will also need to have a bachelor&#39;s degree in computer science, information systems, or engineering (or equivalent work experience); production programming experience in SQL and Python, Scala, or Java; and 2 years of professional experience with Big Data technologies and architectures.</p>","url":"https://yubhub.co/jobs/job_2895081b-eab","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8499576002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Big Data 
technologies","Data engineering","Data lake technology","Data streaming","Data ingestion and workflows","Python","Scala","Java","SQL"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:18.553Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sao Paulo, Brazil"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Big Data technologies, Data engineering, Data lake technology, Data streaming, Data ingestion and workflows, Python, Scala, Java, SQL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd67fe82-1c8"},"title":"Solutions Architect : Data & AI","description":"<p>As a Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the field.</p>\n<p>You will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your clients through evaluating and adopting Databricks including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>\n</ul>\n<p>Together with your Account Executive, you will form successful relationships with clients 
throughout your assigned territory to provide technical and business value</p>\n<p>What we look for:</p>\n<ul>\n<li>Strong consulting / customer facing experience, working with external clients across a variety of industry markets</li>\n<li>Core strength in either data engineering or data science technologies</li>\n<li>8+ years of experience demonstrating technical concepts, including demos, presenting and white-boarding</li>\n<li>8+ years of experience designing architectures within a public cloud (AWS, Azure or GCP)</li>\n<li>6+ years of experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Coding experience in Python, R, Java, Apache Spark or Scala</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. 
We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>","url":"https://yubhub.co/jobs/job_dd67fe82-1c8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8346277002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data technologies","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","R","Java","Scala"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:18.281Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data technologies, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, R, Java, 
Scala"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9336a065-82b"},"title":"Senior Software Engineer (Backend)","description":"<p>As a Senior Software Engineer (Backend) at Databricks, you will work with your team to build infrastructure for the Databricks platform at scale.</p>\n<p>The impact you&#39;ll have:</p>\n<p>Our backend teams cover a diverse range of domains, from core compute fabric resource management to service platforms and infrastructure.</p>\n<p>For example, you might work on challenges such as:</p>\n<ul>\n<li>Supporting Databricks&#39; growth by building foundational infrastructure platforms that enable seamless operation across numerous geographic regions and cloud providers.</li>\n</ul>\n<ul>\n<li>Implementing cloud-agnostic infrastructure abstractions to help Databricks engineers more efficiently manage and operate their services.</li>\n</ul>\n<ul>\n<li>Develop tools and processes that drive engineering efficiency at Databricks.</li>\n</ul>\n<p>We enhance the developer experience for Databricks engineers across various areas, including programming languages, linters, static analysis, IDEs, remote development environments, automated release pipelines, and test automation frameworks.</p>\n<p>Our current focus is on optimizing the Rust development experience across the organization.</p>\n<p>What we look for:</p>\n<ul>\n<li>BS (or higher) in Computer Science, or a related field</li>\n</ul>\n<ul>\n<li>6+ years of production level experience in one of: Python, Java, Scala, C++, or similar language.</li>\n</ul>\n<ul>\n<li>Experience developing large-scale distributed systems from scratch</li>\n</ul>\n<ul>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>\n</ul>\n<ul>\n<li>Proficiency in one or more backend languages such as Java, Scala, or Go.</li>\n</ul>\n<ul>\n<li>Hands-on experience in developing and operating backend 
systems.</li>\n</ul>\n<ul>\n<li>Ability to contribute effectively throughout all project phases, from initial design and development to implementation and ongoing operations, with guidance from senior team members.</li>\n</ul>\n<p>About Databricks:</p>\n<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>\n<p>Benefits:</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>\n<p>Our Commitment to Diversity and Inclusion:</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance:</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. 
government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9336a065-82b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6709301002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","C++","Rust","Go","backend languages","distributed systems","SaaS platform","Service-Oriented Architectures"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:10.210Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, C++, Rust, Go, backend languages, distributed systems, SaaS platform, Service-Oriented Architectures"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0a2ea62c-943"},"title":"Research Engineer, Infrastructure, RL Systems","description":"<p>We&#39;re looking for an infrastructure research engineer to design and build the core systems that enable scalable, efficient training of large models through reinforcement learning.</p>\n<p>This role sits at the intersection of research and large-scale systems engineering: a builder who understands both the algorithms behind RL and the realities of distributed training and inference at scale. 
You&#39;ll wear many hats, from optimising rollout and reward pipelines to enhancing reliability, observability, and orchestration, collaborating closely with researchers and infra teams to make reinforcement learning stable, fast, and production-ready.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, build, and optimise the infrastructure that powers large-scale reinforcement learning and post-training workloads.</li>\n</ul>\n<ul>\n<li>Improve the reliability and scalability of RL training pipelines, distributed RL workloads, and training throughput.</li>\n</ul>\n<ul>\n<li>Develop shared monitoring and observability tools to ensure high uptime, debuggability, and reproducibility for RL systems.</li>\n</ul>\n<ul>\n<li>Collaborate with researchers to translate algorithmic ideas into production-grade training pipelines.</li>\n</ul>\n<ul>\n<li>Build evaluation and benchmarking infrastructure that measures model progress on helpfulness, safety, and factuality.</li>\n</ul>\n<ul>\n<li>Publish and share learnings through internal documentation, open-source libraries, or technical reports that advance the field of scalable AI infrastructure.</li>\n</ul>\n<p>We&#39;re looking for someone with strong engineering skills and the ability to contribute performant, maintainable code and debug in complex codebases. You should have a good understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.</p>\n<p>Experience training or supporting large-scale language models with tens of billions of parameters or more is a plus. Familiarity with monitoring and observability tools (Prometheus, Grafana, OpenTelemetry) is also a plus.</p>\n<p>Logistics:</p>\n<ul>\n<li>Location: This role is based in San Francisco, California.</li>\n</ul>\n<ul>\n<li>Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We sponsor visas. 
While we can&#39;t guarantee success for every candidate or role, if you&#39;re the right fit, we&#39;re committed to working through the visa process together.</li>\n</ul>\n<ul>\n<li>Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0a2ea62c-943","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines Lab","sameAs":"https://thinkingmachineslab.com/","logo":"https://logos.yubhub.co/thinkingmachineslab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5013930008","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD","x-skills-required":["deep learning frameworks","PyTorch","JAX","complex codebases","scalable AI infrastructure","large-scale language models","monitoring and observability tools"],"x-skills-preferred":["experience training or supporting large-scale language models","familiarity with monitoring and observability tools"],"datePosted":"2026-04-18T15:56:59.642Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"deep learning frameworks, PyTorch, JAX, complex codebases, scalable AI infrastructure, large-scale language models, monitoring and observability tools, experience training or supporting large-scale language models, familiarity with monitoring and observability 
tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_07b35bd1-4bf"},"title":"Forward Deployed AI Engineering Manager, GenAI Applications","description":"<p>At Scale AI, we are not just building AI tools. We are pioneering the next era of enterprise AI.</p>\n<p>As businesses rush to harness the potential of Generative AI, Scale is leading the way, transforming workflows, automating complex processes, and driving real-world impact for the world’s largest enterprises and government organizations.</p>\n<p>Our Scale Generative AI Platform (SGP) powers production-grade GenAI applications with foundational services, APIs, and infrastructure that accelerate adoption across industries.</p>\n<p>We are looking for a technical and strategic Engineering Manager to lead our European FDE team.</p>\n<p>This is a high-ownership role at a pivotal moment. You will be responsible for delivering high-impact GenAI solutions in production, leading a team that works directly with customers, and ensuring we solve real problems with clarity, speed, and excellence.</p>\n<p>Why this role is unique:</p>\n<ul>\n<li>Right place, right time: We are moving from prototypes to production at scale. Our FDE team is on the front lines of this transition, helping customers adopt AI faster and with more confidence.</li>\n</ul>\n<ul>\n<li>Customer-first mindset: You will foster a culture of deep customer empathy and practical problem-solving. From scoping use cases to shipping solutions, your team will be responsible for every step of the delivery lifecycle.</li>\n</ul>\n<ul>\n<li>Strategic influence: The lessons from forward-deployed efforts directly inform our core product roadmap. 
You will work closely with Product and Platform teams to identify patterns, prioritize improvements, and shape the evolution of SGP.</li>\n</ul>\n<ul>\n<li>Operational excellence: You will bring structure to delivery, improve execution, and scale our engineering operations in a fast-moving environment.</li>\n</ul>\n<p>This is a rare opportunity to help define how the next generation of AI applications is built and deployed.</p>\n<p>If you are excited by the pace of innovation in GenAI, passionate about solving real-world problems, and ready to lead a team that is redefining enterprise AI delivery, we want to hear from you.</p>\n<p>At Scale, we do not just follow AI breakthroughs. We deliver them. Join us and be part of the team shaping the future of AI in the enterprise.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_07b35bd1-4bf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4589592005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["engineering management","Generative AI","cloud infrastructure","DevOps","scalable platform architecture","strategic thinking","operational rigor","communication and collaboration skills"],"x-skills-preferred":["hands-on experience building or deploying AI-powered systems","model behavior shapes user experience","leadership presence"],"datePosted":"2026-04-18T15:56:54.568Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Berlin, Germany; London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, Generative AI, cloud infrastructure, DevOps, scalable 
platform architecture, strategic thinking, operational rigor, communication and collaboration skills, hands-on experience building or deploying AI-powered systems, model behavior shapes user experience, leadership presence"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e2ce28cb-e68"},"title":"Manager - Technical Support","description":"<p>Are you passionate about operational excellence and developing high-performing technical teams? As a Technical Support Manager at Cloudflare, you will lead a team of talented engineers to deliver exceptional support experiences, meet and exceed KPIs, and ensure Cloudflare&#39;s customers receive the highest level of service.</p>\n<p>Drive Operational Excellence - Own and monitor daily operations, ensuring adherence to SLAs, KPIs, and performance metrics (response time, resolution time, customer satisfaction, backlog health, etc.).</p>\n<p>Lead and Develop a High-Performing Team - Manage, coach, and mentor Support Engineers to achieve their potential and elevate technical excellence.</p>\n<p>Handle and Prevent Escalations - Act as the escalation point for critical incidents (P1/P2), ensuring prompt response, coordination, and resolution.</p>\n<p>Elevate Technical Excellence - Be a hands-on technical leader capable of reviewing and advising on complex cases across networking, DNS, WAF, Zero Trust, and performance.</p>\n<p>Collaborate Cross-Functionally - Partner with Engineering, Product Management, and Customer Success to surface recurring issues and influence product improvements.</p>\n<p>Requirements:</p>\n<ul>\n<li>10+ years of experience in technical support or operations within a SaaS, PaaS, or cloud-based enterprise environment.</li>\n</ul>\n<ul>\n<li>3+ years of people management experience leading technical teams of 5+ engineers across multiple locations.</li>\n</ul>\n<ul>\n<li>Proven record of meeting or exceeding operational KPIs and driving 
continuous improvement.</li>\n</ul>\n<ul>\n<li>Strong technical foundation with deep understanding of: Internet technologies, troubleshooting tools, and experience managing 24x7 global support operations and incident escalation frameworks.</li>\n</ul>\n<ul>\n<li>Exceptional communication and stakeholder management skills; able to translate technical issues into business impact.</li>\n</ul>\n<ul>\n<li>A data-driven mindset: confident using metrics to guide performance, planning, and process improvements.</li>\n</ul>\n<ul>\n<li>Passion for developing people, scaling teams, and creating a culture of excellence.</li>\n</ul>\n<p>Bonus Points:</p>\n<ul>\n<li>Experience supporting security, CDN, Zero Trust, or performance optimization products.</li>\n</ul>\n<ul>\n<li>Fluency in one of the following languages: Mandarin, Korean, Japanese</li>\n</ul>\n<ul>\n<li>Prior experience in start-up or hyper-growth environments where agility and innovation are key.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e2ce28cb-e68","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7601717","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Internet technologies","Troubleshooting tools","Experience managing 24x7 global support operations and incident escalation frameworks","Exceptional communication and stakeholder management skills","Data-driven mindset"],"x-skills-preferred":["Security","CDN","Zero Trust","Performance optimization 
products","Mandarin","Korean","Japanese"],"datePosted":"2026-04-18T15:56:53.567Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"In-Office"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Internet technologies, Troubleshooting tools, Experience managing 24x7 global support operations and incident escalation frameworks, Exceptional communication and stakeholder management skills, Data-driven mindset, Security, CDN, Zero Trust, Performance optimization products, Mandarin, Korean, Japanese"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8cc69c5b-136"},"title":"Manager, Personalized Support","description":"<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>\n<p>Join the team as Twilio&#39;s next Manager, Personalized Support.</p>\n<p>The Personalized Support Manager is responsible for the performance of Twilio&#39;s products and will manage a team of global Technical Account Managers while working in United States Eastern and Central Time-Zones.</p>\n<p>In this role, you&#39;ll:</p>\n<ul>\n<li>Lead a team of Technical Account Managers (TAMs) who are the designated technical contacts for Twilio&#39;s strategic customers.</li>\n<li>As the Personalized Support Manager, you will be the coach and leader for a team of TAMs; bringing out the best in each of your team members with keen interest in their overall well being.</li>\n<li>You are ready to dig deep and address technical questions as well as be able to zoom out and look at the larger picture.</li>\n<li>A large part of your role involves understanding customer roadblocks, pain points and advocating in a data driven way with product management and 
engineering teams to enhance the customer experience and delight Twilio customers.</li>\n<li>You will also be working alongside our Sales teams to manage customer accounts.</li>\n</ul>\n<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>\n<p>If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>\n<p>We are always looking for people who will bring something new to the table!</p>\n<p>We think big. Do you?</p>\n<p>We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things.</p>\n<p>That&#39;s why we seek out colleagues who embody our values, something we call Twilio Magic.</p>\n<p>Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts.</p>\n<p>So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8cc69c5b-136","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7813302","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["6+ years of experience as part of a support or operations team in a software or SaaS company","4+ years experience leading a technical support team in a software or SaaS company","Exceptional emotional intelligence, interpersonal communication and professional writing skills","Demonstrated history of driving complex product issue resolutions including escalation management, 
internal and external strategy planning through to solution delivery","Ability to lead a team while effectively developing and achieving the desired performance outcomes"],"x-skills-preferred":["Knowledge of networking protocols, standards, troubleshooting and cloud computing","Working knowledge of P/L, expense, cost, resource and risk management in enterprise portfolios","Experience working with Salesforce, JIRA, Confluence, Airtable, Zendesk and other project tools"],"datePosted":"2026-04-18T15:56:53.161Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Colombia"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"6+ years of experience as part of a support or operations team in a software or SaaS company, 4+ years experience leading a technical support team in a software or SaaS company, Exceptional emotional intelligence, interpersonal communication and professional writing skills, Demonstrated history of driving complex product issue resolutions including escalation management, internal and external strategy planning through to solution delivery, Ability to lead a team while effectively developing and achieving the desired performance outcomes, Knowledge of networking protocols, standards, troubleshooting and cloud computing, Working knowledge of P/L, expense, cost, resource and risk management in enterprise portfolios, Experience working with Salesforce, JIRA, Confluence, Airtable, Zendesk and other project tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9b06b007-600"},"title":"Senior Software Engineer (App-Framework)","description":"<p>We are seeking a highly skilled and experienced Senior Software Engineer with a deep understanding of low-level systems to join our team. 
In this role, you will be instrumental in designing, developing, and optimizing application frameworks that form the building blocks for all software development at Databricks.</p>\n<p>This is not a full-stack role; your expertise will be focused on the intricate details of system internals, performance, and efficiency.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain core system infrastructure and low-level software components.</li>\n<li>Optimize system performance, reliability, and scalability through meticulous analysis and innovative solutions.</li>\n<li>Work with JVM internals, memory management, concurrency, and distributed systems.</li>\n<li>Collaborate with other senior engineers and architects to define technical strategies and roadmaps.</li>\n<li>Mentor junior engineers and contribute to a culture of technical excellence.</li>\n<li>Participate in code reviews, design discussions, and architectural decision-making.</li>\n<li>Troubleshoot complex system issues and provide effective resolutions.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Electrical Engineering, or a related field.</li>\n<li>7+ years of professional experience in software development.</li>\n<li>Deep understanding of Java Virtual Machine (JVM) internals, including garbage collection, JIT compilation, class loading, and memory model.</li>\n<li>Proficiency in at least one JVM language and extensive experience with its runtime environment.</li>\n<li>Strong programming skills in Scala/Java/Rust or other systems-level languages.</li>\n<li>Extensive experience with operating system concepts, including processes, threads, concurrency, scheduling, and I/O.</li>\n<li>Proven track record of building and optimizing high-performance, scalable, and reliable systems.</li>\n<li>Experience with distributed systems concepts and technologies.</li>\n<li>Excellent problem-solving, analytical, and debugging 
skills.</li>\n<li>Strong communication and collaboration abilities</li>\n<li>Experience with performance profiling and tuning tools.</li>\n<li>Contributions to open-source projects related to JVM or systems software.</li>\n<li>Experience with RPC frameworks.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9b06b007-600","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8294304002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java Virtual Machine (JVM) internals","garbage collection","JIT compilation","class loading","memory model","Scala","Java","Rust","operating system concepts","processes","threads","concurrency","scheduling","I/O","distributed systems","RPC frameworks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:49.155Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java Virtual Machine (JVM) internals, garbage collection, JIT compilation, class loading, memory model, Scala, Java, Rust, operating system concepts, processes, threads, concurrency, scheduling, I/O, distributed systems, RPC frameworks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ac82fe49-5bc"},"title":"Technical Deployment, Applied AI","description":"<p>As a Technical Deployment Lead on the Claude Agentic Solutions team, you will lead the delivery of custom AI agent solutions for enterprise customers in highly regulated industries.</p>\n<p>This is a founding 
team: you will help us to build technical playbooks and define the processes and repeatable patterns needed for us to scale this emerging motion. You will champion our mission in the field, ensure world-class delivery, and bring insights back to our product and research teams on a regular basis.</p>\n<p>You&#39;ll own engagements end-to-end, from SOW through production deployment. You&#39;ll work alongside Forward Deployed Engineers who build the technical solution, while you own product scoping, stakeholder management, value measurement, and the organisational complexity that comes with deploying AI agents in enterprise environments.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own the technical delivery plan for each engagement. Structure SOWs with clear scope, milestones, dependencies, success criteria, and value hypotheses. Translate customer business objectives into a sequenced roadmap that FDEs execute against.</li>\n</ul>\n<ul>\n<li>Lead technical discovery. Map customer workflows, identify constraints, define MVP scope, and shape the solution architecture for custom agent deployments.</li>\n</ul>\n<ul>\n<li>Run day-to-day engineering execution. Drive delivery across Anthropic and customer teams. Keep progress unblocked and sequenced. Make real-time trade-offs on scope and priority to protect the critical path.</li>\n</ul>\n<ul>\n<li>Own product scoping for field engagements. Define the MVP, author requirements documentation, prioritise the engineering backlog, and manage scope against success criteria as requirements evolve.</li>\n</ul>\n<ul>\n<li>Own the customer relationship throughout delivery. Lead executive briefings, manage stakeholder communications across technical leads and procurement, and represent Anthropic&#39;s technical credibility with senior business and engineering leaders.</li>\n</ul>\n<ul>\n<li>Own value measurement and ROI. 
Define impact hypotheses, set baselines and KPIs, run pre- and post-deployment measurement, and report outcomes to executive sponsors.</li>\n</ul>\n<ul>\n<li>Codify reusable delivery assets. Build solution patterns, evaluation frameworks, and technical playbooks. Extract what works across engagements and feed field signals back to Product and Research to improve our platform and models.</li>\n</ul>\n<ul>\n<li>Navigate enterprise and regulatory complexity. Security reviews, legal approvals, procurement processes, compliance requirements, and organisational dynamics.</li>\n</ul>\n<ul>\n<li>Manage scope and change. Handle evolving requirements, set expectations, negotiate contract modifications, identify risks early, and escalate with clear context when needed.</li>\n</ul>\n<ul>\n<li>Run delivery operations. Sprint ceremonies, milestone reviews, and progress reporting.</li>\n</ul>\n<ul>\n<li>Travel to customer sites. Build relationships, unblock delivery, and accelerate adoption. (25–50% expected).</li>\n</ul>\n<p>You May Be a Good Fit If You:</p>\n<ul>\n<li>Have led AI/ML engagements/deployments, whether as a founder, data scientist, engineer, researcher, or in a professional services or consulting role.</li>\n</ul>\n<ul>\n<li>Have delivered AI, ML, or LLM-based agentic solutions into production. You understand solution patterns, integration approaches, and what breaks in real environments.</li>\n</ul>\n<ul>\n<li>Have experience in a specialised vertical (financial services, life sciences, pharmaceutical, retail, mining, agriculture, etc.).</li>\n</ul>\n<ul>\n<li>Can lead architecture discussions with engineering stakeholders, evaluate technical trade-offs, and pressure-test technical decisions. 
You won&#39;t write production code, but you will own the technical direction of engagements alongside FDEs.</li>\n</ul>\n<ul>\n<li>Have a track record delivering complex, high-stakes technical projects for enterprise clients where outcomes depended on tight coordination and fast decision-making, ideally across multiple workstreams in regulated industries.</li>\n</ul>\n<ul>\n<li>Have executive presence: polished, credible, and comfortable representing Anthropic to senior leaders in high-stakes environments.</li>\n</ul>\n<ul>\n<li>Thrive in ambiguity and bring structure where none exists.</li>\n</ul>\n<ul>\n<li>Have a builder&#39;s mindset: you&#39;re here to create a function, not join one.</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role&#39;s On Target Earnings (&#39;OTE&#39;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $200,000-$345,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ac82fe49-5bc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5017903008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$200,000-$345,000 USD","x-skills-required":["AI","Machine Learning","Agentic Solutions","Enterprise Customers","Regulated Industries","Technical Playbooks","Solution Architecture","Customer Relationship Management","Value Measurement","ROI","Reusable Delivery Assets","Solution Patterns","Evaluation Frameworks","Enterprise and Regulatory Complexity","Security Reviews","Legal Approvals","Procurement 
Processes","Compliance Requirements","Organisational Dynamics","Scope and Change Management","Evolving Requirements","Contract Modifications","Risk Identification","Escalation","Delivery Operations","Sprint Ceremonies","Milestone Reviews","Progress Reporting","Travel to Customer Sites","Relationship Building","Delivery Acceleration"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:48.402Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Austin, TX; Boston, MA; New York City, NY; San Francisco, CA; Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI, Machine Learning, Agentic Solutions, Enterprise Customers, Regulated Industries, Technical Playbooks, Solution Architecture, Customer Relationship Management, Value Measurement, ROI, Reusable Delivery Assets, Solution Patterns, Evaluation Frameworks, Enterprise and Regulatory Complexity, Security Reviews, Legal Approvals, Procurement Processes, Compliance Requirements, Organisational Dynamics, Scope and Change Management, Evolving Requirements, Contract Modifications, Risk Identification, Escalation, Delivery Operations, Sprint Ceremonies, Milestone Reviews, Progress Reporting, Travel to Customer Sites, Relationship Building, Delivery Acceleration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200000,"maxValue":345000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8f706224-663"},"title":"Specialist Solutions Architect - Cloud Infrastructure & Security","description":"<p>As a Specialist Solutions Architect (SSA) - Cloud Infrastructure &amp; Security, you will guide customers in the administration and security of their Databricks deployments.</p>\n<p>You will be in a customer-facing role, working with and supporting Solution Architects, which requires hands-on 
production experience with public cloud - AWS, Azure, and GCP.</p>\n<p>SSAs help customers with the design and successful implementation of essential workloads while aligning their technical roadmap to expand the use of the Databricks Platform.</p>\n<p>As a deep go-to expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs and establish yourself in an area of specialty - whether that be cloud deployments, security, networking, or more.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Provide technical leadership to guide strategic customers to the successful administration of Databricks, ranging from design to deployment</li>\n</ul>\n<ul>\n<li>Architect production-level deployments, including meeting necessary security and networking requirements</li>\n</ul>\n<ul>\n<li>Become a technical expert in an area such as cloud platforms, automation, security, networking, or identity management</li>\n</ul>\n<ul>\n<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content and custom architectures</li>\n</ul>\n<ul>\n<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>\n</ul>\n<ul>\n<li>Contribute to the Databricks Community</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience in a technical role with expertise in at least one of the following:</li>\n</ul>\n<ul>\n<li>Cloud Platforms &amp; Architecture: Cloud Native Architecture in CSPs such as AWS, Azure, and GCP, Serverless Architecture</li>\n</ul>\n<ul>\n<li>Security: Platform security, Network security, Data Security, Gen AI &amp; Model Security, Encryption, Vulnerability Management, Compliance</li>\n</ul>\n<ul>\n<li>Networking: Architecture design, implementation, and performance</li>\n</ul>\n<ul>\n<li>Identity management: Provisioning, SCIM, OAuth, SAML, 
Federation</li>\n</ul>\n<ul>\n<li>Platform Administration: High availability and disaster recovery, cluster management, observability, logging, monitoring, audit, cost management</li>\n</ul>\n<ul>\n<li>Infrastructure Automation and InfraOps with IaC tools like Terraform</li>\n</ul>\n<ul>\n<li>Maintain and extend the Databricks environment to adapt to evolving complex needs.</li>\n</ul>\n<ul>\n<li>Deep Specialty Expertise in at least one of the following areas:</li>\n</ul>\n<ul>\n<li>Security - understanding how to secure data platforms and manage identities</li>\n</ul>\n<ul>\n<li>Complex deployments</li>\n</ul>\n<ul>\n<li>Public Cloud experience - experience designing data platforms on cloud infrastructure and services, such as AWS, Azure, or GCP, using best practices in cloud security and networking.</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent practical work experience.</li>\n</ul>\n<ul>\n<li>Hands-on experience with Python, Java, or Scala; proficiency in SQL; and Terraform experience are desirable.</li>\n</ul>\n<ul>\n<li>2 years of professional experience with Big Data technologies (Ex: Spark, Hadoop, Kafka) and architectures</li>\n</ul>\n<ul>\n<li>2 years of customer-facing experience in a pre-sales or post-sales role</li>\n</ul>\n<ul>\n<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>\n</ul>\n<ul>\n<li>This role can be remote, but we prefer that you be located in the job listing area and can travel up to 30% when needed.</li>\n</ul>\n<p>Pay Range Transparency:</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. 
Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>Zone 2 Pay Range $264,000-$363,000 USD</p>\n<p>Zone 3 Pay Range $264,000-$363,000 USD</p>\n<p>Zone 4 Pay Range $264,000-$363,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8f706224-663","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8477197002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$264,000-$363,000 USD","x-skills-required":["Cloud Platforms & Architecture","Security","Networking","Platform Administration","Infrastructure Automation and InfraOps","Big Data technologies","Cloud Native Architecture","Serverless Architecture","Gen AI & Model Security","Encryption","Vulnerability Management","Compliance","SCIM","OAuth","SAML","Federation","High availability and disaster recovery","Cluster management","Observability","Logging","Monitoring","Audit","Cost management","Terraform"],"x-skills-preferred":["Python","Java","Scala","SQL","Terraform experience"],"datePosted":"2026-04-18T15:56:46.870Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Central - United 
States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud Platforms & Architecture, Security, Networking, Platform Administration, Infrastructure Automation and InfraOps, Big Data technologies, Cloud Native Architecture, Serverless Architecture, Gen AI & Model Security, Encryption, Vulnerability Management, Compliance, SCIM, OAuth, SAML, Federation, High availability and disaster recovery, Cluster management, Observability, Logging, Monitoring, Audit, Cost management, Terraform, Python, Java, Scala, SQL, Terraform experience","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":264000,"maxValue":363000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_53ee0ef3-c62"},"title":"Staff Data Engineer, Analytics Data Engineering","description":"<p>We are looking for a Staff Data Engineer to join our Analytics Data Engineering (ADE) team within Data Science &amp; AI Platform. As a Staff Data Engineer, you will be responsible for solving cross-cutting data challenges that span multiple lines of business while driving standardization in how we build, deploy, and govern analytics pipelines across Dropbox.</p>\n<p>This is not a maintenance role. We are modernizing our analytics platform, upgrading orchestration infrastructure, building shared and reusable data models with conformed dimensions, establishing a certified metrics framework, and laying the foundation for AI-native data development. You will partner closely with Data Science, Data Infrastructure, Product Engineering, and Business Intelligence teams to make this happen.</p>\n<p>You will play a crucial role in establishing analytics engineering standards, designing scalable data models, and driving cross-functional alignment on data governance. 
You will get substantial exposure to senior leadership, shape the technical direction of analytics infrastructure at Dropbox, and directly influence how data powers product and business decisions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the design and implementation of shared, reusable data models, defining shared fact tables, conformed dimensions, and a semantic/metrics layer that serves as the single source of truth across analytics functions</li>\n</ul>\n<ul>\n<li>Drive standardization of data engineering practices across ADE and functional analytics teams, including pipeline patterns, CI/CD workflows, naming conventions, and data modeling standards</li>\n</ul>\n<ul>\n<li>Partner with Data Infrastructure to modernize orchestration, improve pipeline decomposition, and establish secure dev/test environments with production data access</li>\n</ul>\n<ul>\n<li>Architect and implement a shift-left data governance strategy, working with upstream data producers to establish data contracts, SLOs, and code-enforced quality gates that catch issues before production</li>\n</ul>\n<ul>\n<li>Collaborate with Data Science leads and Product Management to translate metric definitions into reliable, certified data pipelines that power executive dashboards, WBR reporting, and growth measurement</li>\n</ul>\n<ul>\n<li>Reduce operational burden by improving pipeline granularity, observability, and failure recovery, establishing runbooks and alerting standards that make on-call sustainable</li>\n</ul>\n<ul>\n<li>Evaluate and integrate AI-native tooling into the data development lifecycle, enabling conversational data exploration with guardrails and AI-assisted pipeline development</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>BS degree in Computer Science or related technical field, or equivalent technical experience</li>\n</ul>\n<ul>\n<li>12+ years of experience in data engineering or analytics engineering with increasing scope and technical leadership</li>\n</ul>\n<ul>\n<li>12+ 
years of SQL experience, including complex analytical queries, window functions, and performance optimization at scale (Spark SQL)</li>\n</ul>\n<ul>\n<li>8+ years of Python development experience, including building and maintaining production data pipelines</li>\n</ul>\n<ul>\n<li>Deep expertise in dimensional data modeling, schema design, and scalable data architecture, with hands-on experience building shared data models across multiple business domains</li>\n</ul>\n<ul>\n<li>Strong experience with orchestration tools (Airflow strongly preferred) and dbt, including pipeline design, scheduling strategies, and failure recovery patterns</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with Databricks (Unity Catalog, Delta Lake) and modern lakehouse architectures</li>\n</ul>\n<ul>\n<li>Experience leading orchestration or platform modernization efforts at scale</li>\n</ul>\n<ul>\n<li>Familiarity with data governance and observability tools such as Atlan, Monte Carlo, Great Expectations, or similar</li>\n</ul>\n<ul>\n<li>Experience building or contributing to a metrics/semantic layer (dbt MetricFlow, Databricks Metric Views, or equivalent)</li>\n</ul>\n<ul>\n<li>Track record of establishing data engineering standards and best practices in a federated analytics organization</li>\n</ul>\n<p>Compensation:</p>\n<p>US Zone 2 $198,900-$269,100 USD</p>\n<p>US Zone 3 $176,800-$239,200 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_53ee0ef3-c62","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dropbox","sameAs":"https://www.dropbox.com/","logo":"https://logos.yubhub.co/dropbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dropbox/jobs/7595183","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$198,900-$269,100 
USD","x-skills-required":["SQL","Python","Dimensional data modeling","Schema design","Scalable data architecture","Orchestration tools","dbt"],"x-skills-preferred":["Databricks","Modern lakehouse architectures","Data governance and observability tools","Metrics/semantic layer"],"datePosted":"2026-04-18T15:56:35.190Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US: Select locations"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Dimensional data modeling, Schema design, Scalable data architecture, Orchestration tools, dbt, Databricks, Modern lakehouse architectures, Data governance and observability tools, Metrics/semantic layer","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198900,"maxValue":269100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f24aa64a-8e9"},"title":"DevOps Engineer, GPS","description":"<p>As a DevOps Engineer, you will design and develop core platforms and software systems, while supporting orchestration, data abstraction, data pipelines, identity &amp; access management, security tools, and underlying cloud infrastructure.</p>\n<p>You will:</p>\n<ul>\n<li>Backend Development and System Ownership: Design and implement secure, scalable backend systems for customers using modern, cloud-native AI infrastructure. Own services or systems, define long-term health goals, and improve the health of surrounding components.</li>\n</ul>\n<ul>\n<li>Collaboration and Standards: Collaborate with cross-functional teams to define and execute backend and infrastructure solutions tailored for secure environments. 
Enhance engineering standards, tooling, and processes to maintain high-quality outputs.</li>\n</ul>\n<ul>\n<li>Infrastructure Automation and Management: Write, maintain, and enhance Infrastructure as Code templates (e.g., Terraform, CloudFormation) for automated provisioning and management. Manage networking architecture, including secure VPCs, VPNs, load balancers, and firewalls, in cloud environments.</li>\n</ul>\n<ul>\n<li>Deployment and Scalability: Design and optimize CI/CD pipelines for efficient testing, building, and deployment processes. Scale and optimize containerized applications using orchestration platforms like Kubernetes to ensure high availability and reliability.</li>\n</ul>\n<ul>\n<li>Disaster Recovery and Hybrid Strategies: Develop and test disaster recovery plans with robust backups and failover mechanisms. Design and implement hybrid and multi-cloud strategies to support workloads across on-premises and multiple cloud providers.</li>\n</ul>\n<p>Our ideal candidate has a strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience), and 5+ years of post-graduation engineering experience, with a focus on back-end systems and proficiency in at least one of Python, Typescript, Javascript, or C++.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f24aa64a-8e9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4613839005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Backend Development","System Ownership","Infrastructure Automation","Deployment and Scalability","Disaster Recovery and 
Hybrid Strategies","Cloud-Native AI Infrastructure","Terraform","CloudFormation","Kubernetes","Python","Typescript","Javascript","C++"],"x-skills-preferred":["Collaboration and Standards","Networking Architecture","CI/CD Pipelines","Containerized Applications","Orchestration Platforms","Data Abstraction","Data Pipelines","Identity & Access Management","Security Tools"],"datePosted":"2026-04-18T15:56:30.346Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Doha, Qatar"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Backend Development, System Ownership, Infrastructure Automation, Deployment and Scalability, Disaster Recovery and Hybrid Strategies, Cloud-Native AI Infrastructure, Terraform, CloudFormation, Kubernetes, Python, Typescript, Javascript, C++, Collaboration and Standards, Networking Architecture, CI/CD Pipelines, Containerized Applications, Orchestration Platforms, Data Abstraction, Data Pipelines, Identity & Access Management, Security Tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b367ecde-fb8"},"title":"Software Engineer - Backend","description":"<p>We are seeking a skilled Software Engineer to join our team in Belgrade. As a founding member of our Belgrade site, you will be involved in the entire development cycle and help us achieve our Lakehouse vision. You will work on challenges such as distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience. You will also build reliable, secure and high performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, GCS, Azure Blob Store. 
You will develop product features that empower our customers to easily store and access their data.</p>\n<p>Our backend teams span many domains across our essential service platforms. You will work on problems that span from product to infrastructure, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</p>\n<p>To succeed in this role, you will need a BS (or higher) in Computer Science, or a related field, and 3+ years of production level experience in one of: Java, Scala, C++, or similar language. You will also need experience developing large-scale distributed systems, experience working on a SaaS platform or with Service-Oriented Architectures, and knowledge of SQL.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b367ecde-fb8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8012650002","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","Large-scale distributed systems","Service-Oriented Architectures","SQL"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:21.604Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Belgrade, Serbia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Large-scale distributed systems, Service-Oriented Architectures, SQL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_90423d85-ea7"},"title":"Senior Software Engineer - Fullstack","description":"<p>As a Full Stack software engineer, 
you will work with your team and product management to make insights from data simple. We are looking for engineers who are customer-obsessed and can take on the full scope of the product and user experience beyond the technical implementation. You&#39;ll set the foundation for how we build robust, scalable, and delightful products.</p>\n<p>Some example experiences you&#39;ll create for our customers, spanning the full project lifecycle from loading data and visualizing results to creating statistical models and deploying production artifacts, include:</p>\n<ul>\n<li>Simple workflows to create, configure, and manage large-scale compute clusters, networks and data sources.</li>\n<li>Tools to create, deploy, test, and upgrade complex data pipelines, with powerful features to visualize data graphs.</li>\n<li>Seamless onboarding and management for all members of an organisation to become data-driven.</li>\n<li>A great SQL-centric data exploration and dashboarding experience on Databricks.</li>\n<li>An interactive environment for collaborative data projects at massive scale with an easy path to production.</li>\n</ul>\n<p>We are looking for engineers with 5+ years of experience with HTML, CSS, and JavaScript, passion for user experience and design, and a deep understanding of front-end architecture. 
You should be comfortable working towards a multi-year vision with incremental deliverables, motivated by delivering customer value, and experienced with modern JavaScript frameworks and server-side web technologies.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_90423d85-ea7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/5445641002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["HTML","CSS","JavaScript","SQL","Cloud technologies (AWS, Azure, GCP, Docker, or Kubernetes)","Modern JavaScript frameworks (React, Angular, or VueJs/Ember)","Server-side web technologies (Node.js, Java, Python, Scala, C#, C++, Go)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:16.942Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"HTML, CSS, JavaScript, SQL, Cloud technologies (AWS, Azure, GCP, Docker, or Kubernetes), Modern JavaScript frameworks (React, Angular, or VueJs/Ember), Server-side web technologies (Node.js, Java, Python, Scala, C#, C++, Go)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4081a8b9-003"},"title":"Solutions Architect - Healthcare/Life Sciences Team (HLS)","description":"<p>We are looking for an experienced Solutions Architect to join our 
Healthcare/Life Sciences Team (HLS). As a Solutions Architect, you will work with the Enterprise Account Executive (AE) to define and direct the technical strategy for our largest and most important accounts, leading to more widespread use of our products and wider and deeper adoption of ML &amp; AI.</p>\n<p>You will lean upon your solid background in value selling, technical account management and technical leadership to maximise success in these accounts. While you work with a team that includes hands-on resources who will build proofs of concept and demonstrate Databricks&#39; products, you need to be technical and must understand the relevance and application of ML &amp; AI within a range of use cases important to the target accounts in the Healthcare &amp; Life Sciences (HLS) space.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You work with multiple clients as the main technical voice for Databricks.</li>\n<li>You lead your customers on a transformational journey, helping them to evaluate and adopt Databricks as part of their strategy.</li>\n<li>You implement the technical strategy in the account, in close alignment with the overall account strategy.</li>\n<li>You build a movement of technical champions within the account.</li>\n<li>You align technical strategies around Databricks solutions.</li>\n<li>You provide structured mentorship for other team members.</li>\n<li>You gain the respect of your peers based on your experience, insight, and contributions.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Proficiency at establishing virtual teams and leading them to success within the account.</li>\n<li>Experience working with very large (&gt; $1m ARR), global accounts.</li>\n<li>Form relationships with executives and influencers.</li>\n<li>Present a convincing point-of-view to important decision-makers that leads them down a path of success.</li>\n<li>Technical depth in big data, data science, and cloud.</li>\n<li>An ability to drive data-driven business transformation and 
change with data.</li>\n<li>Programming experience in Python, SQL, or Scala.</li>\n<li>Can travel up to 35% when needed.</li>\n<li>Bachelor&#39;s degree in computer science or a related field.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4081a8b9-003","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8085877002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["big data","data science","cloud","Python","SQL","Scala","ML & AI","value selling","technical account management","technical leadership"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:14.658Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Illinois"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data, data science, cloud, Python, SQL, Scala, ML & AI, value selling, technical account management, technical leadership","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2f962d3f-14e"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud 
technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s, and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<ul>\n<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep 
experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts.</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n</ul>\n<ul>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2f962d3f-14e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461218002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems","Apache Spark","CI/CD","MLOps","performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:09.899Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dallas, Texas"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0b5a4347-f37"},"title":"Sr. Machine Learning Engineer, Monetization Engineering","description":"<p>About this role:</p>\n<p>We&#39;re looking for a Senior Machine Learning Engineer to join our Monetization team. 
As a key member of the team, you will be responsible for developing and executing a vision for the evolution of the machine learning technology stack within Ads.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building cutting-edge technology using the latest advances in deep learning and machine learning to personalize Pinterest</li>\n<li>Partnering closely with teams across Pinterest to experiment and improve ML models for various product surfaces (Homefeed, Ads, Growth, Shopping, and Search)</li>\n<li>Using data-driven methods and leveraging the unique properties of our data to improve candidate retrieval</li>\n<li>Working in a high-impact environment with quick experimentation and product launches</li>\n<li>Keeping up with industry trends in recommendation systems</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>2+ years of industry experience applying machine learning methods</li>\n<li>Degree in computer science, statistics, or related field; or equivalent experience</li>\n<li>End-to-end hands-on experience with building data processing pipelines, large-scale machine learning systems, and big data technologies</li>\n<li>Practical knowledge of large-scale recommender systems, or modern ads ranking, retrieval, targeting, marketplace systems</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>M.S. 
or PhD in Machine Learning or related areas</li>\n<li>Publications at top ML conferences</li>\n<li>Experience using Cursor, Copilot, Codex, or similar AI coding assistants for development, debugging, testing, and refactoring</li>\n<li>Familiarity with LLM-powered productivity tools for documentation search, experiment analysis, SQL/data exploration, and engineering workflow acceleration</li>\n<li>Expertise in scalable real-time systems that process stream data</li>\n<li>Passion for applied ML and the Pinterest product</li>\n<li>Background in computational advertising</li>\n</ul>\n<p>Relocation Statement:</p>\n<p>This position is not eligible for relocation assistance.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0b5a4347-f37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pinterest","sameAs":"https://www.pinterest.com/","logo":"https://logos.yubhub.co/pinterest.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/6121551","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$189,721-$332,012 USD","x-skills-required":["Machine Learning","Deep Learning","Data Processing Pipelines","Large-Scale Machine Learning Systems","Big Data Technologies","Recommender Systems","Ads Ranking","Retrieval","Targeting","Marketplace Systems"],"x-skills-preferred":["M.S. 
or PhD in Machine Learning or related areas","Publications at top ML conferences","Experience using Cursor, Copilot, Codex, or similar AI coding assistants","Familiarity with LLM-powered productivity tools","Expertise in scalable real-time systems","Passion for applied ML and the Pinterest product","Background in computational advertising"],"datePosted":"2026-04-18T15:56:06.423Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Deep Learning, Data Processing Pipelines, Large-Scale Machine Learning Systems, Big Data Technologies, Recommender Systems, Ads Ranking, Retrieval, Targeting, Marketplace Systems, M.S. or PhD in Machine Learning or related areas, Publications at top ML conferences, Experience using Cursor, Copilot, Codex, or similar AI coding assistants, Familiarity with LLM-powered productivity tools, Expertise in scalable real-time systems, Passion for applied ML and the Pinterest product, Background in computational advertising","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":189721,"maxValue":332012,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_314ed80b-9f4"},"title":"Staff Mobile Software Engineer, iOS","description":"<p>At Gusto, we&#39;re on a mission to grow the small business economy. We handle the hard stuff , payroll, health insurance, 401(k)s, and HR , so owners can focus on their craft and their customers. 
With teams in Denver, San Francisco, and New York, we support more than 400,000 small businesses nationwide and are building a workplace that reflects the people we serve.</p>\n<p>We&#39;re on the lookout for talented Mobile iOS Engineers who are passionate about creating amazing user experiences. In this role, you&#39;ll have the chance to dive into developing features that are not just functional but truly intuitive and enjoyable for our users. We want you to take ownership of scaling our shared services while keeping up with the latest in the iOS world. You&#39;ll work closely with teams across Product, Identity, Security, System Design, and Infrastructure, collaborating to build solutions that really make a difference.</p>\n<p>If you&#39;re excited about building elegant, scalable apps and being part of a supportive, innovative mobile team, we&#39;d love to hear from you!</p>\n<p>About the Team:</p>\n<p>At Gusto, we&#39;re excited to be at a pivotal moment in our journey with over 1M+ monthly active users and the recent launch of our employer experiences on mobile. Our mission is to tackle the real challenges faced by small business owners and their employees, and we believe that mobile is key to delivering impactful solutions. As part of our mobile team, you&#39;ll be at the forefront of this transformation, working alongside talented engineers and designers who are passionate about creating a seamless mobile experience. 
Together, we&#39;re building a platform that empowers everyone, small business owners and their teams, to thrive.</p>\n<p>Here&#39;s what you&#39;ll do day-to-day:</p>\n<ul>\n<li>Architect, build, test, and refine Gusto&#39;s native iOS app, along with supporting mobile web views that enhance user experience.</li>\n<li>Develop, iterate, and improve product features that integrate core business functions, work tools, value-added services, and financial products.</li>\n<li>Collaborate closely with our product management, design, and partner teams to identify technical and customer pain points, brainstorm solutions, and then prototype, iterate, and launch those solutions.</li>\n<li>Work cross-functionally with teams in product apps, identity, security, design systems, and infrastructure to deliver world-class experiences right into our customers&#39; hands.</li>\n<li>Build and scale essential services, such as push notification systems and localization features, to enhance app functionality.</li>\n<li>Enhance and maintain our iOS mobile infrastructure, including build pipelines, testing automation, and the release process, to ensure smooth operations.</li>\n</ul>\n<p>Here&#39;s what we&#39;re looking for:</p>\n<ul>\n<li>A minimum of 6 years of experience in Swift and iOS software engineering, with at least 8 years in mobile software engineering overall.</li>\n<li>Excellent communication skills and a knack for building strong cross-functional partnerships.</li>\n<li>A self-driven mindset with the ability to tackle greenfield projects and bring innovative ideas to life.</li>\n<li>Proficiency in iOS testing frameworks and a solid understanding of best practices.</li>\n<li>Experience in developing platform components and common features that enhance scalability, consistency, and maintainability throughout the product development lifecycle.</li>\n<li>Strong critical thinking abilities and a keen attention to detail.</li>\n<li>A willingness to learn continuously and a passion 
for mentoring others on the team.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_314ed80b-9f4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Gusto","sameAs":"https://www.gusto.com/","logo":"https://logos.yubhub.co/gusto.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gusto/jobs/7623124","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$197,000/yr to $235,000/yr","x-skills-required":["Swift","iOS software engineering","Mobile software engineering","iOS testing frameworks","Best practices","Platform components","Common features","Scalability","Consistency","Maintainability"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:58.649Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA;New York, NY;Toronto, Ontario, CAN - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Swift, iOS software engineering, Mobile software engineering, iOS testing frameworks, Best practices, Platform components, Common features, Scalability, Consistency, Maintainability","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":197000,"maxValue":235000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0036f074-845"},"title":"Resident Solutions Architect - Financial Services","description":"<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and 
cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap hands-on projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>9+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>\n<li>Familiarity with CI/CD for 
production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Capable of design and deployment of highly performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>\n<li>Travel to customers up to 20% of the time</li>\n</ul>\n<p>Nice to have: Databricks Certification</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0036f074-845","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8456966002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","design and deployment of highly performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:41.870Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Boston, Massachusetts"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly performant end-to-end data architectures, technical project delivery, 
documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_701f91f2-211"},"title":"Senior Staff Software Engineer (Backend)","description":"<p>We are hiring a Senior Staff Software Engineer for Databricks&#39; Engineering team reporting to an Engineering Leader. You will be part of the Databricks engineering organization, working with teams that develop Databricks products and features for thousands of enterprises worldwide.</p>\n<p>As an executive engineering individual contributor at Databricks, you will have full ownership of the product or infrastructure direction in a major area, driving it from initial development to scalable solutions with clear business impact. You will serve as a force multiplier by elevating stability, reliability, and organizational processes, while bringing deep expertise in large-scale distributed systems. You will mentor senior engineers, contribute to recruiting, and lead high-impact company projects, often tackling complex problems beyond their comfort zone.</p>\n<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems, from security threat detection to cancer drug development. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high value challenges that are central to their own missions.</p>\n<p>Our engineering teams build highly technical products that fulfill real, important needs in the world. We develop and operate one of the largest scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day. 
At our scale, we regularly observe cloud hardware, network, and operating system faults, and our software must gracefully shield our customers from any of the above.</p>\n<p>The Impact you will have:</p>\n<ul>\n<li>Solve real business needs at large scale by applying your software engineering expertise.</li>\n<li>Deliver a highly scalable, available, and fault-tolerant engine processing hundreds of TB of data daily across thousands of customers</li>\n<li>Low-level systems debugging, performance measurement &amp; optimization on large production clusters.</li>\n<li>Drive architecture design, influence the product roadmap, and take ownership and responsibility for new projects</li>\n<li>Use your deep experience to help prevent and investigate production issues.</li>\n<li>Plan and lead complicated technical projects that work with several teams within the company.</li>\n<li>Act as a strong influencer and driver of the organization’s roadmap and direction.</li>\n<li>Lead a TLG or similar review committee, or initiate and sustain an org/eng-wide initiative driven by engineering needs.</li>\n<li>Break down complex problems quickly into potential solutions, knowns, and unknowns, and de-risk (through prototyping/validation).</li>\n<li>Contribute as a Technical Team Lead by mentoring others, leading sprint planning, delegating work and assignments to team members, and participating in project planning.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>15+ years of industry experience building and supporting large-scale distributed systems.</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>\n<li>Motivated by delivering customer value and impact.</li>\n<li>Strong foundation in algorithms and data structures and their real-world use cases.</li>\n<li>Experience driving company initiatives towards customer satisfaction.</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. 
More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. 
government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_701f91f2-211","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7651345002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["large-scale distributed systems","software engineering","scalable solutions","algorithms","data structures"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:41.296Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"large-scale distributed systems, software engineering, scalable solutions, algorithms, data structures"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_de5de1b0-8da"},"title":"Software Engineer","description":"<p>We&#39;re looking for a skilled Software Engineer to join our team. 
As a Software Engineer at Stripe, you will build and manage user-facing products and platforms to lower our financial and regulatory risk while retaining a best-in-class user experience.</p>\n<p>Your responsibilities will include translating business problems into scalable engineering solutions, collaborating with stakeholders from engineering, product, policy, legal, operations, and data science teams, and improving engineering standards, tooling, and processes.</p>\n<p>You will have the opportunity to work on a wide range of projects, from building new features to debugging production issues across services and multiple levels of the stack.</p>\n<p>To be successful in this role, you will need to have a strong understanding of software engineering principles and practices, as well as excellent communication and collaboration skills.</p>\n<p>We offer a competitive salary range of $213,512 - $285,600/yr, as well as additional benefits such as equity, company bonus or sales commissions/bonuses, 401(k) plan, medical, dental, and vision benefits, and wellness stipends.</p>\n<p>If you&#39;re passionate about building scalable and reliable software systems, and enjoy working in a fast-paced and dynamic environment, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_de5de1b0-8da","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe, LLC.","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7808471","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$213,512 - $285,600/yr","x-skills-required":["Python","Ruby","Java","Distributed systems","Scalable systems","Software 
engineering"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:40.574Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Ruby, Java, Distributed systems, Scalable systems, Software engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":213512,"maxValue":285600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8bdbbf2b-ec6"},"title":"Staff GenAI Backend Engineer, Automation Foundation","description":"<p>As a staff software engineer on the Automation Foundation team, you will play a critical role in providing world-class customer service for Airbnb&#39;s community of guests and hosts. You will lead two key areas: the Automation Platform, a large-scale conversational AI platform, and automation provisioning for internal human agents and AI agents. You will collaborate with cross-functional teams to drive success metrics such as NPS, CSAT, and time-to-resolution.</p>\n<p>A typical day will involve collaborating with product, design, engineering, and data science teams to develop backend systems and enhance AI prompt effectiveness. You will drive the technical vision and strategy for workflow and backend optimization, leading and contributing to the full development cycle: technical design, implementation, testing, experimentation, and deployment.</p>\n<p>We are looking for a seasoned software engineer with 9+ years of experience in service-oriented architectures and backend development. You should have expertise in workflow optimization and backend systems, with a focus on scalable and flexible architecture. 
You should also be proficient in crafting backend systems focusing on technical quality, efficiency, and resilience.</p>\n<p>In addition to your technical expertise, you should have excellent collaboration and communication skills to work effectively across teams and domains. You should be passionate about agile development, system optimization, and team productivity enhancements.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8bdbbf2b-ec6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7463421","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["service-oriented architectures","backend development","workflow optimization","backend systems","scalable and flexible architecture","technical quality","efficiency","resilience"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:26.210Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - USA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"service-oriented architectures, backend development, workflow optimization, backend systems, scalable and flexible architecture, technical quality, efficiency, resilience"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0a7cad02-cd5"},"title":"Resident Solutions Architect - Manufacturing","description":"<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short to medium term customer engagements on their big data challenges using the Databricks 
platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues</li>\n</ul>\n<ul>\n<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep 
experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n</ul>\n<ul>\n<li>Ability to travel up to 30% when needed</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0a7cad02-cd5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8494155002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:20.115Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Philadelphia, Pennsylvania"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0b9b8ad5-920"},"title":"Senior Software Engineer, Full Stack","description":"<p>Join us</p>\n<p>As a Senior Full-Stack Software Engineer on our Task Workflows Platform team, you will build and scale the foundational infrastructure that powers core experiences across Brex. 
You will work across our extensive suite of platforms - including our multi-channel notifications engine, collaborative commenting system, and dynamic workflow rule builder.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Work with engineers across the company to build new features and products end-to-end</li>\n<li>Own problems end-to-end, thinking through everything from user experience, data models, scalability, operability and ongoing metrics</li>\n<li>Provide technical leadership to the team by driving roadmap direction, architectural design and mentorship</li>\n<li>Work side-by-side with user-facing teams (Sales, Support) to best understand the needs of our customers</li>\n<li>Tune and polish features to a high degree of excellence</li>\n<li>Identify and implement reliability and performance improvements</li>\n<li>Uphold our high engineering standards and bring consistency to the codebases, infrastructure, and processes you will encounter</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>7+ years of professional experience designing, developing, and deploying full-stack products</li>\n<li>Strong proficiency in backend programming languages (Java, Kotlin, Python)</li>\n<li>Experience architecting, building, and maintaining scalable, high-availability distributed systems</li>\n<li>Experience designing and optimizing SQL and/or NoSQL databases, including data modeling, query performance tuning, and schema design</li>\n<li>Experience building and maintaining RESTful APIs and/or GraphQL services</li>\n<li>Experience with modern frontend technologies and tools like ES6+, React, TypeScript, and Webpack</li>\n</ul>\n<p>Bonus points</p>\n<ul>\n<li>Experience collaborating with cross-functional stakeholders and specialists in product, design, and operations</li>\n<li>Experience driving initiatives at a broader level across an organization or company</li>\n<li>Experience running in-product experiments, A/B testing</li>\n<li>Experience and interest in leveraging AI developer 
tools</li>\n</ul>\n<p>Compensation</p>\n<p>The expected salary range for this role is $192,000 - $240,000. However, the starting base pay will depend on a number of factors, including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0b9b8ad5-920","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8472634002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$192,000 - $240,000","x-skills-required":["backend programming languages (Java, Kotlin, Python)","architecting, building, and maintaining scalable, high-availability distributed systems","designing and optimizing SQL and/or NoSQL databases","building and maintaining RESTful APIs and/or GraphQL services","modern frontend technologies and tools like ES6+, React, TypeScript, and Webpack"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:19.892Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend programming languages (Java, Kotlin, Python), architecting, building, and maintaining scalable, high-availability distributed systems, designing and optimizing SQL and/or NoSQL databases, building and maintaining RESTful APIs and/or GraphQL services, modern frontend technologies and tools like ES6+, React, TypeScript, and 
Webpack","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e68e5c3b-1e2"},"title":"Lakebase Account Executive","description":"<p>We are seeking a Lakebase Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>\n<p>As a Lakebase Account Executive, you will drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</p>\n<p>You will lead with outcomes for key Lakebase personas, including platform teams and developers, data teams, and central IT, articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</p>\n<p>You will sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</p>\n<p>You will run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</p>\n<p>You will orchestrate proof-of-value and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</p>\n<p>You will compete and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</p>\n<p>You will align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, 
and simplification of the operational data landscape.</p>\n<p>You will partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</p>\n<p>You will enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</p>\n<p>This role requires the ability to operate across two key motions simultaneously:</p>\n<p>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</p>\n<p>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</p>\n<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>\n<p>Success in this role requires strength in four areas:</p>\n<p>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</p>\n<p>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</p>\n<p>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</p>\n<p>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party events.</p>\n<p>The interview process is designed to evaluate candidates across all four of these dimensions.</p>\n<p>We are looking for a candidate with 7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, 
multi-stakeholder deals.</p>\n<p>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</p>\n<p>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</p>\n<p>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</p>\n<p>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</p>\n<p>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</p>\n<p>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</p>\n<p>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</p>\n<p>Bachelor’s degree or equivalent practical experience.</p>\n<p>Preferred qualifications include experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</p>\n<p>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</p>\n<p>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</p>\n<p>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</p>\n<p>Prior experience in a high-growth, category-creating 
environment, helping shape new plays, messaging, and customer narratives.</p>\n<p>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e68e5c3b-1e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8449848002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Postgres","operational databases","OLTP workloads","transactional cloud database services","data platforms","lakehouse architectures","cloud ecosystems","reverse ETL","real-time decisioning","operational analytics","AI-native applications","agent-driven applications","low-latency","highly scalable operational data services"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:06.106Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Postgres, operational databases, OLTP workloads, transactional cloud database services, data platforms, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics, AI-native applications, agent-driven applications, low-latency, highly scalable operational data services"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ccb5daf2-354"},"title":"Sr. ML Ops Engineer, tvScientific","description":"<p>We&#39;re looking for a Senior MLOps Engineer to join our distributed engineering team on our Connected TV ad-buying platform. 
As a Senior MLOps Engineer, you will be responsible for scaling the decision-making process for tools for the tvScientific AI team, improving the developer experience for the data science team, upgrading our observability tooling, serving as a technical lead and mentor to the team, and making every deployment smooth as our infrastructure evolves.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Scaling the decision-making process for tools for the tvScientific AI team, from our workflows to our training infrastructure to our Kubernetes deployments</li>\n<li>Improving the developer experience for the data science team</li>\n<li>Upgrading our observability tooling</li>\n<li>Serving as a technical lead and mentor to the team</li>\n<li>Making every deployment smooth as our infrastructure evolves</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Deep understanding of Linux</li>\n<li>Excellent writing skills</li>\n<li>A systems-oriented mindset</li>\n<li>Experience in high-performance software (RTB, HFT, etc.)</li>\n<li>Software engineering experience + reliability (e.g. 
CI/CD) expertise</li>\n<li>Strong observability instincts</li>\n<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs</li>\n<li>Strong track record of critical evaluation and verification of AI-assisted work (e.g., testing, source-checking, data validation, peer review)</li>\n<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables</li>\n</ul>\n<p>Nice-to-haves include:</p>\n<ul>\n<li>Reverse-engineering experience</li>\n<li>Terraform, EKS, or MLOps experience</li>\n<li>Python, Scala, or Zig experience</li>\n<li>NixOS experience</li>\n<li>Adtech or CTV experience</li>\n<li>Experience deploying a distributed system across multiple clouds</li>\n<li>Experience in hard real-time low-latency</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ccb5daf2-354","directApply":true,"hiringOrganization":{"@type":"Organization","name":"tvScientific","sameAs":"https://www.tvscientific.com/","logo":"https://logos.yubhub.co/tvscientific.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/7642249","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$155,584-$320,320 USD","x-skills-required":["Linux","writing skills","systems-oriented mindset","high-performance software","software engineering","reliability","observability","AI","critical evaluation","verification","data protection","data validation","peer review"],"x-skills-preferred":["reverse-engineering","Terraform","EKS","MLOps","Python","Scala","Zig","NixOS","adtech","CTV","distributed system","hard real-time low-latency"],"datePosted":"2026-04-18T15:55:03.102Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, US; Remote, 
US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux, writing skills, systems-oriented mindset, high-performance software, software engineering, reliability, observability, AI, critical evaluation, verification, data protection, data validation, peer review, reverse-engineering, Terraform, EKS, MLOps, Python, Scala, Zig, NixOS, adtech, CTV, distributed system, hard real-time low-latency","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":155584,"maxValue":320320,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0b29d013-412"},"title":"Senior Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Our customers use deep data insights to improve their business. As a senior software engineer on the Runtime team, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>\n<p>Some example projects include: Apache Spark: Develop the de facto open source standard framework for big data. Data Plane Storage: Provide reliable and high performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store. Delta Lake: A storage management system that combines the scale and cost-efficiency of data lakes, the performance and reliability of a data warehouse, and the low latency of streaming. 
Delta Pipelines: It&#39;s difficult to manage even a single data engineering pipeline. The goal of the Delta Pipelines project is to make it simple to orchestrate and operate tens of thousands of data pipelines. Performance Engineering: Build the next generation query optimizer and execution engine that&#39;s fast, tuning-free, scalable, and robust.</p>\n<p>We look for: BS (or higher) in Computer Science, related technical field or equivalent practical experience. Comfortable working towards a multi-year vision with incremental deliverables. Motivated by delivering customer value and impact. 5+ years of production level experience in either Java, Scala or C++. Strong foundation in algorithms and data structures and their real-world use cases. Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop).</p>\n<p>Pay Range Transparency: Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. 
Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Local Pay Range $166,000-$225,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0b29d013-412","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/4513122002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:01.767Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0a47dd78-a8f"},"title":"Solutions Architect, Applied AI (Beneficial Deployments)","description":"<p>As a Solutions Architect, Applied AI at Anthropic, you will be a Pre-Sales architect focused on becoming a trusted technical advisor helping large enterprises understand the value of Claude and paint the vision on how they can successfully 
integrate and deploy Claude into their technology stack.</p>\n<p>You will combine your deep technical expertise with customer-facing skills to architect innovative LLM solutions that address complex business challenges while maintaining our high standards for safety and reliability.</p>\n<p>Working closely with our Sales, Product, and Engineering teams, you&#39;ll guide customers from initial technical discovery through successful deployment. You&#39;ll leverage your expertise to help customers understand Claude&#39;s capabilities, develop evals, and design scalable architectures that maximize the value of our AI systems.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with account executives across India to deeply understand customer requirements and translate them into technical solutions, ensuring alignment between business objectives and technical implementation</li>\n</ul>\n<ul>\n<li>Serve as the primary technical advisor to enterprise customers across India throughout their Claude adoption journey, from discovery to initial evaluation through deployment. You will need to coordinate internally across multiple teams &amp; stakeholders to drive customer success</li>\n</ul>\n<ul>\n<li>Support customers building with both the Claude API and Claude for Work</li>\n</ul>\n<ul>\n<li>Create and deliver compelling technical content tailored to different audiences across India. 
You will need to be able to run the gamut from technical deep dives for engineering &amp; development teams up to business value focused conversations with executives</li>\n</ul>\n<ul>\n<li>Guide technical architecture decisions and help customers across India integrate Claude effectively into their existing technology stack</li>\n</ul>\n<ul>\n<li>Help customers develop evaluation frameworks to measure Claude&#39;s performance for their specific use cases</li>\n</ul>\n<ul>\n<li>Identify common integration patterns and contribute insights back to our Product and Engineering teams</li>\n</ul>\n<ul>\n<li>Travel occasionally to customer sites for workshops, technical deep dives, and relationship building</li>\n</ul>\n<ul>\n<li>Maintain strong knowledge of the latest developments in LLM capabilities and implementation patterns</li>\n</ul>\n<p>You may be a good fit if you have:</p>\n<ul>\n<li>7+ years of experience in technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager</li>\n</ul>\n<ul>\n<li>Experience working with enterprise customers, navigating complex buying cycles involving multiple stakeholders</li>\n</ul>\n<ul>\n<li>Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering &amp; IT teams, and more</li>\n</ul>\n<ul>\n<li>Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders</li>\n</ul>\n<ul>\n<li>Experience designing scalable cloud architectures and integrating with enterprise systems</li>\n</ul>\n<ul>\n<li>Comfortable with Python</li>\n</ul>\n<ul>\n<li>Familiarity with common LLM frameworks and tools or a background in machine learning or data science</li>\n</ul>\n<ul>\n<li>Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</li>\n</ul>\n<ul>\n<li>A love of teaching, 
mentoring, and helping others succeed</li>\n</ul>\n<ul>\n<li>Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders. You enjoy engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</li>\n</ul>\n<ul>\n<li>Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems</li>\n</ul>\n<p>Logistics:</p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<ul>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p>Your safety matters to us. 
To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>\n<p>How we&#39;re different:</p>\n<ul>\n<li>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles.</li>\n</ul>\n<ul>\n<li>We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science.</li>\n</ul>\n<ul>\n<li>We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time.</li>\n</ul>\n<ul>\n<li>As such, we greatly value communication skills.</li>\n</ul>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p>Come work with us!</p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. 
We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0a47dd78-a8f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5146028008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Technical customer-facing experience","Experience working with enterprise customers","Ability to build relationships with diverse stakeholders","Strong technical communication skills","Experience designing scalable cloud architectures","Comfortable with Python","Familiarity with common LLM frameworks and tools"],"x-skills-preferred":["Machine learning or data science background","Experience with AI research","Collaboration and teamwork skills","Communication and interpersonal skills"],"datePosted":"2026-04-18T15:54:54.122Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bangalore, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical customer-facing experience, Experience working with enterprise customers, Ability to build relationships with diverse stakeholders, Strong technical communication skills, Experience designing scalable cloud architectures, Comfortable with Python, Familiarity with common LLM frameworks and tools, Machine learning or data science background, Experience with AI research, Collaboration and teamwork skills, Communication and interpersonal 
skills"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bbbd3f3a-5fe"},"title":"Solutions Architect (Pre-sales) - Digital Native","description":"<p>As a Pre-sales Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the Digital Native field.</p>\n<p>You will help our customers to achieve tangible data-driven outcomes through the use of the Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise ecosystem. You&#39;ll grow as a leader in your field, while finding solutions to our customers&#39; biggest challenges in big data, analytics, data engineering and data science.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your prospects through evaluating and adopting Databricks</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Pre-sales or post-sales experience working with external clients across a variety of industry markets</li>\n<li>Experience in a customer-facing pre-sales or consulting role with a core strength in either Data Engineering or Data Science is advantageous</li>\n<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>\n<li>Experience designing and implementing architectures within public clouds (AWS, Azure or GCP)</li>\n<li>Experience with Big Data technologies, including Apache 
Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others.</li>\n<li>Fluent coding experience in Python or Scala implementing Apache Spark; Java and R are also desirable</li>\n<li>Experience working with Enterprise Accounts</li>\n<li>Written and verbal fluency in Japanese and English</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bbbd3f3a-5fe","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437026002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Public Cloud","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra"],"x-skills-preferred":["Python","Scala","Java","R"],"datePosted":"2026-04-18T15:54:50.098Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Public Cloud, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5f66d426-bea"},"title":"Principal Software Engineer, Corporate AI","description":"<p>The Principal Software Engineer is a highly skilled expert responsible for shaping and executing the organization&#39;s intelligence vision. 
This role integrates expertise in Artificial Intelligence (AI), Machine Learning (ML), Automation, Data Analytics and Visualization to deliver transformative customer, partner, and colleague experiences that drive revenue growth and enhance productivity.</p>\n<p>The position defines the technical direction for intelligence initiatives, leading the design, development, and deployment of robust, scalable, and secure AI solutions while fostering innovation through emerging technologies.</p>\n<p>A critical aspect of the role is providing partnership, mentorship and technical guidance, cultivating a culture of excellence and continuous learning. Through close cross-functional collaboration across teams and stakeholders, the role ensures technical efforts are strategically aligned and deliver measurable impact.</p>\n<p>Additionally, the position plays a central role in strategic problem-solving, addressing complex challenges in intelligence systems and data pipelines, and making informed architectural decisions that ensure long-term scalability and success.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Define, drive, and communicate the technical vision for intelligence, AI, and data initiatives, ensuring alignment with CIT strategy, EPD goals, and broader organisational objectives.</li>\n</ul>\n<ul>\n<li>Take a holistic view of CIT systems and architecture to ensure they are scalable, reliable, secure, and maintainable over multiple years.</li>\n</ul>\n<ul>\n<li>Lead the design, development, and deployment of high-performance AI systems, data pipelines, and intelligent services from conception through production.</li>\n</ul>\n<ul>\n<li>Make strategic architectural decisions to address complex AI, data, and platform challenges, balancing short-term delivery with long-term resilience and scalability.</li>\n</ul>\n<ul>\n<li>Identify opportunities to simplify systems, reduce operational and security risk, and improve developer 
productivity.</li>\n</ul>\n<ul>\n<li>Contribute directly to prototyping, proof of concepts, and implementation of technical components when needed to validate strategy, de-risk decisions, or accelerate progress.</li>\n</ul>\n<ul>\n<li>Architect, evolve, and scale AI, automation, and intelligence platforms that enable advanced analytics, personalisation, search, and intelligent decision-making.</li>\n</ul>\n<ul>\n<li>Drive innovation in intelligence models, distributed training, optimisation techniques, and data engineering to maximise performance, quality, and business impact.</li>\n</ul>\n<ul>\n<li>Enhance search and discovery capabilities using intelligent algorithms, natural language processing, and modern data systems.</li>\n</ul>\n<ul>\n<li>Evaluate, select, and integrate emerging technologies in AI, ML, and automation to maintain a competitive and forward-looking technical posture.</li>\n</ul>\n<ul>\n<li>Partner across engineering, product, design, infrastructure, and other stakeholders to ensure intelligence initiatives directly support strategic objectives.</li>\n</ul>\n<ul>\n<li>Translate technical capabilities and advancements into clear business outcomes that improve productivity, efficiency, and growth.</li>\n</ul>\n<ul>\n<li>Resolve conflicting requirements and priorities with sound technical judgment that favours long-term organisational outcomes over local optimisation.</li>\n</ul>\n<ul>\n<li>Advocate for intelligence-driven solutions across the organisation and influence company-wide technical priorities.</li>\n</ul>\n<ul>\n<li>Act as a trusted technical advisor to senior engineering leadership, with IC6 scope extending to org-wide and EPD-level strategy.</li>\n</ul>\n<ul>\n<li>Provide mentorship and technical guidance to engineers and data scientists from mid-level through senior, fostering continuous learning and technical excellence.</li>\n</ul>\n<ul>\n<li>Serve as a technical multiplier by raising the effectiveness of surrounding teams through 
design reviews, code reviews, architectural guidance, and pragmatic execution.</li>\n</ul>\n<ul>\n<li>Facilitate knowledge sharing across teams through documentation, design write-ups, technical discussions, and mentorship programs.</li>\n</ul>\n<ul>\n<li>Act as a voice for engineers by synthesising feedback, surfacing gaps and risks, and communicating them clearly to leadership.</li>\n</ul>\n<ul>\n<li>Contribute to multi-year technical vision and roadmap planning, anticipating future scale, complexity, and organisational needs.</li>\n</ul>\n<ul>\n<li>Identify architectural, operational, and security risks early and mobilise proactive mitigation plans across org boundaries.</li>\n</ul>\n<ul>\n<li>Partner closely with managers, product leaders, and senior engineers to ensure ambitious initiatives remain feasible, sustainable, and well-aligned.</li>\n</ul>\n<ul>\n<li>For IC6 scope, influence technical direction beyond CIT and partner directly with senior EPD leadership on company-wide strategy.</li>\n</ul>\n<ul>\n<li>Lead and support critical, high-impact initiatives by defining technical direction, clarifying requirements, gathering estimates, and ensuring delivery against milestones.</li>\n</ul>\n<ul>\n<li>Drive execution on complex projects with significant ambiguity or high cost of failure.</li>\n</ul>\n<ul>\n<li>Improve engineering effectiveness by championing best practices such as CI, automated testing, reliability reviews, and clear ownership models.</li>\n</ul>\n<ul>\n<li>Promote a bias toward action, thoughtful experimentation, and continuous learning.</li>\n</ul>\n<ul>\n<li>Model excellence in engineering craft, collaboration, accountability, and inclusive behaviour.</li>\n</ul>\n<ul>\n<li>Lead by example in living Dropbox values, including integrity, ownership, simplicity, and inclusivity.</li>\n</ul>\n<ul>\n<li>Support hiring by interviewing, calibrating candidates against a high technical bar, and representing Dropbox authentically to candidates and 
partners.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>12+ years of professional experience in software engineering, with depth in areas such as intelligent workflows, enterprise-scale AI adoption, automation, or data engineering.</li>\n</ul>\n<ul>\n<li>Proven track record of leading large-scale, multi-team technical initiatives from conception to production, including solving ambiguous problems, setting technical vision, and driving impact without direct authority.</li>\n</ul>\n<ul>\n<li>Strong architectural judgment and systems thinking, with the ability to balance short-term delivery with long-term sustainability, scalability, and operational excellence.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to influence across teams and disciplines through technical leadership, collaboration, and sound decision-making rather than formal authority.</li>\n</ul>\n<ul>\n<li>Experience mentoring engineers and raising the technical bar of an organisation through design reviews, code reviews, and technical guidance.</li>\n</ul>\n<ul>\n<li>Exceptional written and verbal communication skills, with the ability to clearly explain complex technical concepts, translate technical strategy to diverse audiences, and influence stakeholders at all levels.</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Strong coding ability in at least one language commonly used in AI and data systems such as Python, Java, Go, or Scala, with hands-on experience building models, data pipelines, or scalable production services.</li>\n</ul>\n<ul>\n<li>Experience operating in platform, infrastructure, or internal tooling organisations, including leading or significantly influencing org-wide technical initiatives.</li>\n</ul>\n<ul>\n<li>Proven ability to navigate ambiguity and competing priorities, drive clarity, and make sound technical and product trade-offs in partnership with product managers.</li>\n</ul>\n<ul>\n<li>Experience collaborating cross-functionally with 
product, design, infrastructure, and legal or privacy stakeholders to deliver AI-powered or data-intensive products responsibly.</li>\n</ul>\n<ul>\n<li>Familiarity with AI-assisted development practices in large codebases, along with experience representing engineering externally through talks, blogs, or industry events when applicable.</li>\n</ul>\n<p><strong>Compensation:</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5f66d426-bea","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dropbox","sameAs":"https://www.dropbox.com/","logo":"https://logos.yubhub.co/dropbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dropbox/jobs/7537004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Artificial Intelligence","Machine Learning","Automation","Data Analytics","Visualization","Python","Java","Go","Scala","Cloud Storage","File-Sharing","Software Engineering","Intelligent Workflows","Enterprise-Scale AI Adoption","Data Engineering"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:49.866Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Canada: Select locations"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Artificial Intelligence, Machine Learning, Automation, Data Analytics, Visualization, Python, Java, Go, Scala, Cloud Storage, File-Sharing, Software Engineering, Intelligent Workflows, Enterprise-Scale AI Adoption, Data Engineering"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a03720f6-bc3"},"title":"Solutions Architect","description":"<p>As a Solutions Architect at Databricks, you will partner with our customers 
to design scalable data architectures using Databricks technology and services.</p>\n<p>You have technical depth and business knowledge and can drive complex technology discussions which express the value of the Databricks platform throughout the sales lifecycle.</p>\n<p>In partnership with our Account Executives, you will engage with our customers&#39; technical leads, including architects, engineers, and operations teams with the goal of establishing yourself as a trusted advisor to achieve tangible outcomes.</p>\n<p>You will work with teams across Databricks and our executive leadership to represent your customer&#39;s needs and build valuable customer engagements and report to the Field Engineering Manager.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work with Sales and other essential partners to develop account strategies for your assigned accounts to grow their usage of the platform.</li>\n</ul>\n<ul>\n<li>Establish the Databricks Lakehouse architecture as the standard data architecture for customers through excellent technical account planning.</li>\n</ul>\n<ul>\n<li>Build and present reference architectures and demo applications for prospects to help them understand how Databricks can be used to achieve their goals to land new users and use cases.</li>\n</ul>\n<ul>\n<li>Capture the technical win by consulting on big data architectures, data engineering pipelines, and data science/machine learning projects; prove out the Databricks technology for strategic customer projects; and validate integrations with cloud services and other 3rd party applications.</li>\n</ul>\n<ul>\n<li>Become an expert in, and promote Databricks inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>5+ years in a customer-facing pre-sales, technical architecture, or consulting role with expertise in at least one of the following 
technologies:</li>\n</ul>\n<ul>\n<li>Big data engineering (Ex: Spark, Hadoop, Kafka)</li>\n</ul>\n<ul>\n<li>Data Warehousing &amp; ETL (Ex: SQL, OLTP/OLAP/DSS)</li>\n</ul>\n<ul>\n<li>Data Science and Machine Learning (Ex: pandas, scikit-learn, HPO)</li>\n</ul>\n<ul>\n<li>Data Applications (Ex: Logs Analysis, Threat Detection, Real-time Systems Monitoring, Risk Analysis and more)</li>\n</ul>\n<ul>\n<li>Experience translating a customer&#39;s business needs to technology solutions, including establishing buy-in with essential customer stakeholders at all levels of the business.</li>\n</ul>\n<ul>\n<li>Experienced at designing, architecting, and presenting data systems for customers and managing the delivery of production solutions of those data architectures.</li>\n</ul>\n<ul>\n<li>Fluent in SQL and database technology.</li>\n</ul>\n<ul>\n<li>Debug and development experience in at least one of the following languages: Python, Scala, Java, or R.</li>\n</ul>\n<ul>\n<li>Desired: Built solutions with public cloud providers such as AWS, Azure, or GCP</li>\n</ul>\n<ul>\n<li>Desired: Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)</li>\n</ul>\n<ul>\n<li>Travel to customers in your region up to 30% of the time.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a03720f6-bc3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/5898477002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$164,500-$224,000 CAD","x-skills-required":["Big data engineering","Data Warehousing & ETL","Data Science and Machine Learning","Data Applications","SQL and database technology","Python, 
Scala, Java, or R"],"x-skills-preferred":["Built solutions with public cloud providers such as AWS, Azure, or GCP","Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)"],"datePosted":"2026-04-18T15:54:41.801Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big data engineering, Data Warehousing & ETL, Data Science and Machine Learning, Data Applications, SQL and database technology, Python, Scala, Java, or R, Built solutions with public cloud providers such as AWS, Azure, or GCP, Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":164500,"maxValue":224000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cba88898-896"},"title":"Research Engineer, Infrastructure, Kernels","description":"<p>We&#39;re looking for an infrastructure research engineer to design, optimize, and maintain the compute foundations that power large-scale language model training. You will develop high-performance ML kernels (e.g., CUDA, CuTe, Triton), enable efficient low-precision arithmetic, and improve the distributed compute stack that makes training large models possible.</p>\n<p>This role is perfect for an engineer who enjoys working close to the metal and across the research boundary. You&#39;ll collaborate with researchers and systems architects to bridge algorithmic design with hardware efficiency. 
You&#39;ll prototype new kernel implementations, profile performance across hardware generations, and help define the numerical and parallelism strategies that determine how we scale next-generation AI systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and implement custom ML kernels (e.g., CUDA, CuTe, Triton) for core LLM operations such as attention, matrix multiplication, gating, and normalization, optimized for modern GPU and accelerator architectures.</li>\n<li>Design and think through compute primitives to reduce memory bandwidth bottlenecks and improve kernel compute efficiency.</li>\n<li>Collaborate with research teams to align kernel-level optimizations with model architecture and algorithmic goals.</li>\n<li>Develop and maintain a library of reusable kernels and performance benchmarks that serve as the foundation for internal model training.</li>\n<li>Contribute to infrastructure stability and scalability, ensuring reproducibility, consistency across precision formats, and high utilization of compute resources.</li>\n<li>Document and share insights through internal talks, technical papers, or open-source contributions to strengthen the broader ML systems community.</li>\n</ul>\n<p><strong>Skills and Qualifications</strong></p>\n<p>Minimum qualifications:</p>\n<ul>\n<li>Bachelor’s degree or equivalent experience in computer science, electrical engineering, statistics, machine learning, physics, robotics, or similar.</li>\n<li>Strong engineering skills: the ability to contribute performant, maintainable code and debug in complex codebases</li>\n<li>Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.</li>\n<li>Thrive in a highly collaborative environment involving many different cross-functional partners and subject matter experts.</li>\n<li>A bias for action, with the initiative to work across different stacks and teams where you spot the opportunity to make sure 
something ships.</li>\n<li>Proficiency in CUDA, CuTe, Triton, or other GPU programming frameworks.</li>\n<li>Demonstrated ability to analyze, profile, and optimize compute-intensive workloads.</li>\n</ul>\n<p>Preferred qualifications:</p>\n<ul>\n<li>Experience training or supporting large-scale language models with tens of billions of parameters or more.</li>\n<li>Track record of improving research productivity through infrastructure design or process improvements.</li>\n<li>Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators.</li>\n<li>Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks.</li>\n<li>Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM).</li>\n<li>Contributions to open-source GPU, ML systems, or compiler optimization projects.</li>\n<li>Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cba88898-896","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines Lab","sameAs":"https://thinkingmachines.ai/","logo":"https://logos.yubhub.co/thinkingmachines.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5013934008","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD","x-skills-required":["CUDA","CuTe","Triton","GPU programming frameworks","Deep learning frameworks (e.g., PyTorch, JAX)","Computer science","Electrical engineering","Statistics","Machine learning","Physics","Robotics"],"x-skills-preferred":["Experience training or supporting large-scale language models with 
tens of billions of parameters or more","Track record of improving research productivity through infrastructure design or process improvements","Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators","Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks","Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM)","Contributions to open-source GPU, ML systems, or compiler optimization projects","Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure"],"datePosted":"2026-04-18T15:54:38.498Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"CUDA, CuTe, Triton, GPU programming frameworks, Deep learning frameworks (e.g., PyTorch, JAX), Computer science, Electrical engineering, Statistics, Machine learning, Physics, Robotics, Experience training or supporting large-scale language models with tens of billions of parameters or more, Track record of improving research productivity through infrastructure design or process improvements, Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators, Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks, Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM), Contributions to open-source GPU, ML systems, or compiler optimization projects, Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI 
infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fc79e6e5-5c0"},"title":"Resident Solutions Architect - Manufacturing","description":"<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues</li>\n</ul>\n<ul>\n<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet 
customer&#39;s needs</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n</ul>\n<ul>\n<li>Ability to travel up to 30% when needed</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. 
The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fc79e6e5-5c0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8494156002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","Data engineering","Data science","Cloud technology"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:34.838Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, Data engineering, Data science, Cloud technology","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1a61c510-640"},"title":"Senior Software Engineer (App-Framework)","description":"<p>We are seeking a highly skilled and 
experienced Senior Software Engineer with a deep understanding of low-level systems to join our team. In this role, you will be instrumental in designing, developing, and optimizing application frameworks that form the building blocks for all software development at Databricks.</p>\n<p>Your expertise will be focused on the intricate details of system internals, performance, and efficiency. You will work closely with other senior engineers and architects to define technical strategies and roadmaps, and collaborate with junior engineers to mentor and contribute to a culture of technical excellence.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain core system infrastructure and low-level software components.</li>\n<li>Optimize system performance, reliability, and scalability through meticulous analysis and innovative solutions.</li>\n<li>Work with JVM internals, memory management, concurrency, and distributed systems.</li>\n<li>Collaborate with other senior engineers and architects to define technical strategies and roadmaps.</li>\n<li>Mentor junior engineers and contribute to a culture of technical excellence.</li>\n<li>Participate in code reviews, design discussions, and architectural decision-making.</li>\n<li>Troubleshoot complex system issues and provide effective resolutions.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Electrical Engineering, or a related field.</li>\n<li>7+ years of professional experience in software development.</li>\n<li>Deep understanding of Java Virtual Machine (JVM) internals, including garbage collection, JIT compilation, class loading, and the memory model.</li>\n<li>Proficiency in at least one JVM language and extensive experience with its runtime environment.</li>\n<li>Strong programming skills in Scala/Java/Rust or other systems-level languages.</li>\n<li>Extensive experience with operating system concepts, including processes, threads, concurrency, 
scheduling, and I/O.</li>\n<li>Proven track record of building and optimizing high-performance, scalable, and reliable systems.</li>\n<li>Experience with distributed systems concepts and technologies.</li>\n<li>Excellent problem-solving, analytical, and debugging skills.</li>\n<li>Strong communication and collaboration abilities</li>\n<li>Experience with performance profiling and tuning tools.</li>\n<li>Contributions to open-source projects related to JVM or systems software.</li>\n<li>Experience with RPC frameworks.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1a61c510-640","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8210383002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java Virtual Machine (JVM) internals","garbage collection","JIT compilation","class loading","memory model","Scala","Java","Rust","operating system concepts","processes","threads","concurrency","scheduling","I/O","distributed systems","performance profiling","tuning tools","RPC frameworks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:33.597Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java Virtual Machine (JVM) internals, garbage collection, JIT compilation, class loading, memory model, Scala, Java, Rust, operating system concepts, processes, threads, concurrency, scheduling, I/O, distributed systems, performance profiling, tuning tools, RPC 
frameworks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_85de935a-a75"},"title":"Senior Marketing Operations Manager, Product-Led Growth","description":"<p>Why join us</p>\n<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>\n<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek. Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry.</p>\n<p>We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream. We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>\n<p>Marketing at Brex</p>\n<p>Marketing tells the story of Brex to the world. From acquisition to activation, we translate product value into business results. Our team spans Revenue, Product, and Brand Marketing, and works closely with nearly every function at Brex. We move fast, experiment often, and think deeply about customer behavior. If you want your creativity to drive growth and shape perception, this is the place.</p>\n<p>What you’ll do</p>\n<p>The Brex Marketing team is looking for an experienced Senior Marketing Operations Manager to own the systems, data infrastructure, and digital growth engine powering our Product-Led Growth (PLG) motion. 
This role is central to how Brex scales digital acquisition, optimizes self-serve onboarding flows, and unlocks marketing performance through automation, experimentation, and insights.</p>\n<p>The ideal candidate is passionate about building a best-in-class marketing tech stack, including structuring event schemas, improving attribution, unlocking insights, and driving efficiency. This person is also excited about how AI and agentic workflows can transform our operational processes, improve personalization, accelerate experimentation velocity, and automate routine tasks.</p>\n<p>You will help define and execute our future-state marketing operations architecture, modernizing systems, processes, and data across paid, web, product, and lifecycle channels.</p>\n<p>Where you’ll work</p>\n<p>This role will be based in our Seattle office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. Starting February 2, 2026, we will require three days per week in office - Monday, Wednesday and Thursday. 
As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p>Responsibilities</p>\n<ul>\n<li>Own and evolve the PLG martech ecosystem, including Twilio Segment, Google Analytics, Marketo, Sanity CMS, Salesforce, and paid-channel integrations, to ensure a best-in-class, scalable, and reliable infrastructure.</li>\n</ul>\n<ul>\n<li>Define and execute a future-state roadmap for PLG operations, leveraging AI-driven automation, agentic workflows, and scalable systems foundations.</li>\n</ul>\n<ul>\n<li>Build and optimize automated lifecycle and activation programs using AI-assisted segmentation, predictive scoring, and personalized content delivery.</li>\n</ul>\n<ul>\n<li>Partner with Web, Product, and Engineering teams to modernize event tracking frameworks, ensuring clean, structured, privacy-aligned data flows through Segment, GA, and in-product analytics.</li>\n</ul>\n<ul>\n<li>Support and scale experimentation by integrating event tracking, metadata, and insights with AI-enabled analysis and rapid test iteration.</li>\n</ul>\n<ul>\n<li>Collaborate with Paid Growth to ensure high-quality tagging, attribution, and channel measurement across Google Ads, LinkedIn Ads, Meta Ads, and Reddit Ads.</li>\n</ul>\n<ul>\n<li>Partner cross-functionally with CX, Operations, Sales, and Web teams to support chatbot and live chat experiences on Brex.com, including qualification logic, routing workflows, data capture, and integration with downstream teams.</li>\n</ul>\n<p>Serve as the Marketing Operations lead, ensuring technical implementation, measurement, workflow orchestration, and operational governance, even in a co-owned model.</p>\n<ul>\n<li>Build dashboards and insights leveraging AI-enhanced analytics to monitor PLG health, funnel friction, conversion behavior, and growth loops.</li>\n</ul>\n<ul>\n<li>Identify opportunities to automate manual processes using Zapier, Segment, Marketo programs, and AI agents to improve speed, accuracy, and 
scale.</li>\n</ul>\n<ul>\n<li>Troubleshoot and resolve issues across systems (Segment, GA, Marketo, Salesforce), maintaining a high-quality data environment and rapid operational velocity.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>4+ years in Marketing Operations, Growth Operations, or Digital Systems roles supporting PLG or digital-first acquisition funnels.</li>\n</ul>\n<ul>\n<li>Hands-on experience with Segment, Google Analytics, Marketo, and event-based tracking frameworks.</li>\n</ul>\n<ul>\n<li>Strong analytical capabilities related to experimentation, attribution, and funnel performance measurement.</li>\n</ul>\n<ul>\n<li>Experience supporting paid acquisition workflows across Google, LinkedIn, Meta, and other digital channels.</li>\n</ul>\n<ul>\n<li>Experience partnering with cross-functional teams (CX, Operations, Sales, Web) to manage or enhance chatbot and/or live chat experiences, including qualification, routing, data models, and workflows, even when operational ownership is distributed across teams.</li>\n</ul>\n<ul>\n<li>Demonstrated experience driving operational efficiency and automation through workflow orchestration tools (Zapier, agentic AI systems, CDP-triggered workflows).</li>\n</ul>\n<ul>\n<li>Track record of evolving or modernizing a marketing tech stack toward a future-state architecture.</li>\n</ul>\n<ul>\n<li>Experience collaborating closely with Product, Web Engineering, and Data teams.</li>\n</ul>\n<p>Bonus Points</p>\n<ul>\n<li>Experience at FinTech or SaaS companies with PLG or self-serve onboarding models.</li>\n</ul>\n<ul>\n<li>Experience using AI and automation to scale marketing workflows, such as generative personalization, predictive scoring, and automated experiment QA.</li>\n</ul>\n<ul>\n<li>Comfort building integrated workflows between CMS (Sanity) and acquisition tracking systems.</li>\n</ul>\n<ul>\n<li>Familiarity with customer journey analytics tools such as Amplitude, Mixpanel, or 
similar.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to document architecture, propose long-term solutions, and operationalize complex systems with cross-functional partners.</li>\n</ul>\n<ul>\n<li>Understanding of digital identity verification steps and risk-aware conversion optimization.</li>\n</ul>\n<ul>\n<li>Familiarity with Lead-to-Product connective processes (where website signups eventually feed the GTM funnel).</li>\n</ul>\n<ul>\n<li>Knowledge of ABM or enterprise programs is a plus for hybrid funnel interactions.</li>\n</ul>\n<p>Compensation</p>\n<p>The expected salary range for this role is $134,696 - $168,370. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p>Please be aware, job-seekers may be at risk of targeting by malicious actors looking for personal data. Brex recruiters will only reach out via LinkedIn or email with a brex.com domain. 
Any outreach claiming to be from Brex via other sources should be ignored.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_85de935a-a75","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8380681002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$134,696 - $168,370","x-skills-required":["Segment","Google Analytics","Marketo","event-based tracking frameworks","AI-driven automation","agentic workflows","scalable systems foundations","paid-channel integrations","Twilio Segment","Sanity CMS","Salesforce"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:29.061Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Finance","skills":"Segment, Google Analytics, Marketo, event-based tracking frameworks, AI-driven automation, agentic workflows, scalable systems foundations, paid-channel integrations, Twilio Segment, Sanity CMS, Salesforce","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":134696,"maxValue":168370,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_44ad5a7e-cf5"},"title":"Solutions Architect (Taiwan)","description":"<p>We are seeking a Solutions Architect to join our Field Engineering team in Singapore. As a Solutions Architect, you will be responsible for demonstrating how our Data Intelligence Platform can help customers solve their complex data challenges. 
You will work with a collaborative, customer-focused team that values innovation and creativity, using your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Form successful relationships with clients in Taiwan, providing technical and business value to Databricks customers in collaboration with Account Executives.</li>\n<li>Operate as an expert in big data analytics to excite customers about Databricks. You will develop into a ‘champion’ and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</li>\n<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>\n<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Engage customers in technical sales, challenge their questions, guide clear outcomes, and communicate technical and value propositions.</li>\n<li>Develop customer relationships and build internal partnerships with account executives and teams.</li>\n<li>Prior experience with coding in a core programming language (e.g., Python, Java, Scala) and willingness to learn a base level of Apache Spark.</li>\n<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s).</li>\n<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences, requiring an ability to context switch in levels of technical depth.</li>\n<li>Proficiency in Mandarin is required as this role serves clients based in Taiwan and involves direct customer communications in 
Mandarin.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_44ad5a7e-cf5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8499585002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","Apache Spark","Big Data Analytics","Mandarin"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:23.481Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, Apache Spark, Big Data Analytics, Mandarin"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a2470208-732"},"title":"Solutions Architect, Applied AI (Industries)","description":"<p>As an Applied AI team member at Anthropic, you will be a Pre-Sales architect focused on becoming a trusted technical advisor helping large enterprises understand the value of Claude and paint the vision of how they can successfully integrate and deploy Claude into their technology stack.</p>\n<p>You&#39;ll combine your deep technical expertise with customer-facing skills to architect innovative LLM solutions that address complex business challenges while maintaining our high standards for safety and reliability.</p>\n<p>Working closely with our Sales, Product, and Engineering teams, you&#39;ll guide customers from initial technical discovery through successful deployment. 
You&#39;ll leverage your expertise to help customers understand Claude&#39;s capabilities, develop evals, and design scalable architectures that maximize the value of our AI systems.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with account executives to deeply understand customer requirements and translate them into technical solutions, ensuring alignment between business objectives and technical implementation</li>\n</ul>\n<ul>\n<li>Serve as the primary technical advisor to enterprise customers throughout their Claude adoption journey, from discovery through initial evaluation to deployment. You will need to coordinate internally across multiple teams &amp; stakeholders to drive customer success</li>\n</ul>\n<ul>\n<li>Support customers building with both the Claude API and Claude for Work</li>\n</ul>\n<ul>\n<li>Create and deliver compelling technical content tailored to different audiences. You will need to be able to run the gamut from technical deep dives for engineering &amp; development teams up to business value focused conversations with executives</li>\n</ul>\n<ul>\n<li>Guide technical architecture decisions and help customers integrate Claude effectively into their existing technology stack</li>\n</ul>\n<ul>\n<li>Help customers develop evaluation frameworks to measure Claude&#39;s performance for their specific use cases</li>\n</ul>\n<ul>\n<li>Identify common integration patterns and contribute insights back to our Product and Engineering teams</li>\n</ul>\n<ul>\n<li>Travel occasionally to customer sites for workshops, technical deep dives, and relationship building</li>\n</ul>\n<ul>\n<li>Maintain strong knowledge of the latest developments in LLM capabilities and implementation patterns</li>\n</ul>\n<p>You may be a good fit if you have:</p>\n<ul>\n<li>5+ years of experience in technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager</li>\n</ul>\n<ul>\n<li>Native German speaker with fluent English 
proficiency</li>\n</ul>\n<ul>\n<li>Experience working with enterprise customers, navigating complex buying cycles involving multiple stakeholders</li>\n</ul>\n<ul>\n<li>Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders, including C-suite executives, engineering &amp; IT teams, and more</li>\n</ul>\n<ul>\n<li>Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders</li>\n</ul>\n<ul>\n<li>Experience designing scalable cloud architectures and integrating with enterprise systems</li>\n</ul>\n<ul>\n<li>Comfortable with Python</li>\n</ul>\n<ul>\n<li>Familiarity with common LLM frameworks and tools or a background in machine learning or data science</li>\n</ul>\n<ul>\n<li>Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</li>\n</ul>\n<ul>\n<li>A love of teaching, mentoring, and helping others succeed</li>\n</ul>\n<ul>\n<li>Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders. 
You enjoy engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</li>\n</ul>\n<ul>\n<li>Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems</li>\n</ul>\n<p>Annual Salary: €190,000-€215,000 EUR</p>\n<p>Logistics:</p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>\n<p>Your safety matters to us. 
To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>\n<p>How we&#39;re different:</p>\n<ul>\n<li>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles.</li>\n</ul>\n<ul>\n<li>We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science.</li>\n</ul>\n<ul>\n<li>We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time.</li>\n</ul>\n<ul>\n<li>As such, we greatly value communication skills.</li>\n</ul>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p>Come work with us!</p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. 
We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a2470208-732","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4977624008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"€190,000-€215,000 EUR","x-skills-required":["Technical customer-facing roles","Solutions Architect","Sales Engineer","Technical Account Manager","Native German speaker","Fluent English proficiency","Experience working with enterprise customers","Complex buying cycles","Multiple stakeholders","Exceptional ability to build relationships","Communicate technical concepts","Diverse stakeholders","C-suite executives","Engineering & IT teams","Strong technical communication skills","Translate customer requirements","Technical and business stakeholders","Experience designing scalable cloud architectures","Integrating with enterprise systems","Comfortable with python","Familiarity with common LLM frameworks and tools","Background in machine learning or data science","Excitement for cross-organizational collaboration","Working through trade-offs","Balancing competing priorities","Love of teaching, mentoring, and helping others succeed","Excellent communication and interpersonal skills","Convey complicated topics in easily understandable terms","Diverse set of external and internal stakeholders","Passion for thinking creatively about how to use technology","Safe and beneficial","Advancing safe AI 
systems"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:22.181Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical customer-facing roles, Solutions Architect, Sales Engineer, Technical Account Manager, Native German speaker, Fluent English proficiency, Experience working with enterprise customers, Complex buying cycles, Multiple stakeholders, Exceptional ability to build relationships, Communicate technical concepts, Diverse stakeholders, C-suite executives, Engineering & IT teams, Strong technical communication skills, Translate customer requirements, Technical and business stakeholders, Experience designing scalable cloud architectures, Integrating with enterprise systems, Comfortable with python, Familiarity with common LLM frameworks and tools, Background in machine learning or data science, Excitement for cross-organizational collaboration, Working through trade-offs, Balancing competing priorities, Love of teaching, mentoring, and helping others succeed, Excellent communication and interpersonal skills, Convey complicated topics in easily understandable terms, Diverse set of external and internal stakeholders, Passion for thinking creatively about how to use technology, Safe and beneficial, Advancing safe AI systems","baseSalary":{"@type":"MonetaryAmount","currency":"EUR","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":215000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d3b8ab18-0ba"},"title":"Solutions Architect - Healthcare/Life Sciences Team (HLS)","description":"<p>We are looking for an experienced Solutions Architect to join our Healthcare/Life Sciences Team (HLS). 
As a Solutions Architect, you will work with our Enterprise Account Executive to define and direct the technical strategy for our largest and most important accounts. You will lead our customers on a transformational journey, helping them to evaluate and adopt Databricks as part of their strategy.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading virtual teams to success within the account</li>\n<li>Establishing relationships with executives and influencers</li>\n<li>Presenting a convincing point-of-view to important decision-makers</li>\n<li>Implementing the technical strategy in the account</li>\n<li>Building a movement of technical champions within the account</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Proficiency in establishing virtual teams and leading them to success</li>\n<li>Experience working with large, global accounts</li>\n<li>Ability to present a convincing point-of-view to important decision-makers</li>\n<li>Technical expertise in big data, data science, and cloud</li>\n<li>Experience with programming languages such as Python, SQL, or Scala</li>\n</ul>\n<p>Benefits include:</p>\n<ul>\n<li>Competitive salary range of $180,000-$247,500 USD</li>\n<li>Eligibility for annual performance bonus</li>\n<li>Equity</li>\n<li>Comprehensive benefits and perks</li>\n</ul>\n<p>At Databricks, we strive to provide a diverse and inclusive culture where everyone can excel. 
We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d3b8ab18-0ba","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8231231002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["big data","data science","cloud","Python","SQL","Scala"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:20.277Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - California"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data, data science, cloud, Python, SQL, Scala","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_973b554f-cde"},"title":"Senior Software Engineer - Backend","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>\n<p>As a senior software engineer with a backend focus, you will work with your team to build infrastructure and products for the Databricks platform at scale.</p>\n<p>Our backend teams span many domains across our essential service platforms, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer 
experience.</p>\n<p>You will deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.</p>\n<p>You will also build reliable, scalable services using Scala, Kubernetes, and data pipelines using Spark and Databricks to power the pricing infrastructure that serves millions of cluster-hours per day.</p>\n<p>Additionally, you will develop product features that empower customers to easily view and control platform usage.</p>\n<p>We look for candidates with a BS (or higher) in Computer Science or a related field, 3+ years of production-level experience in Java, Scala, C++, or a similar language, experience developing large-scale distributed systems, and good knowledge of SQL.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_973b554f-cde","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8029671002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","SQL","Kubernetes","Spark","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:12.827Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam, Netherlands"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, SQL, Kubernetes, Spark, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_02ba8342-079"},"title":"Specialist Solutions Architect - Data Warehousing (Healthcare & Life 
Sciences)","description":"<p>As a Specialist Solutions Architect (SSA) - Data Warehousing, you will guide customers in their cloud data warehousing transformation with Databricks. You will be in a customer-facing role, working with and supporting Solution Architects, that requires hands-on production experience with large-scale data warehousing technologies and lakehouse architecture.</p>\n<p>The SSA helps customers through evaluations and successful production planning for their business intelligence workloads while aligning their technical roadmap for the Databricks Data Intelligence Platform.</p>\n<p>As a deep go-to-expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs and establish yourself in the data warehousing specialty - including performance tuning, data modeling, winning evaluations, architecture design, and production migration planning.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Provide technical leadership to guide strategic customers to successful cloud transformations on large-scale data warehousing workloads - ranging from evaluation to architecture design to production deployment</li>\n<li>Prove the value of the Databricks Intelligence Platform for customer workloads by architecting production workloads, including end-to-end pipeline load performance testing and optimization</li>\n<li>Become a technical expert in an area such as data warehousing evaluations or helping set up successful workload migrations</li>\n<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing and performance, and tuning workloads for production</li>\n<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>\n<li>Contribute to the Databricks Community</li>\n</ul>\n<p>What we look 
for:</p>\n<ul>\n<li>5+ years experience in a technical role with expertise in data warehousing - such as query tuning, performance tuning, troubleshooting, data governance, debugging MPP data warehouses or other big data solutions, or migrating workloads from EDW or other systems</li>\n<li>Experience with design and implementation of data warehousing technologies including relational databases, SQL, data analytics, NoSQL, MPP, OLTP, and OLAP</li>\n<li>Deep Specialty Expertise in at least one of the following areas:</li>\n</ul>\n<ul>\n<li>Experience scaling large analytical data workloads in the cloud that are performant and cost-effective</li>\n<li>Maintained, extended, or migrated a production data warehouse system to evolve with complex needs, including data modeling, data governance needs, and integration with business intelligence tools</li>\n<li>Experience migrating on-premise EDW workloads to the public cloud</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>\n<li>Production programming experience in SQL and Python, Scala, or Java</li>\n<li>Experience with the AWS, Azure, or GCP clouds</li>\n<li>2 years professional experience with data warehousing and big data technologies (Ex: SQL, Redshift, SAP, Synapse, EMR, OLAP &amp; OLTP workloads)</li>\n<li>2 years customer-facing experience in a pre-sales or post-sales role</li>\n<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>\n<li>Can travel up to 30% when needed</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_02ba8342-079","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8337429002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["data warehousing","cloud data warehousing","Databricks","lakehouse architecture","SQL","Python","Scala","Java","AWS","Azure","GCP","data analytics","NoSQL","MPP","OLTP","OLAP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:06.778Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data warehousing, cloud data warehousing, Databricks, lakehouse architecture, SQL, Python, Scala, Java, AWS, Azure, GCP, data analytics, NoSQL, MPP, OLTP, OLAP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a78c8753-f89"},"title":"Staff Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>\n<p>We develop and operate one of the largest scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day. 
At our scale, we regularly observe cloud hardware, network, and operating system faults, and our software must gracefully shield our customers from any of the above.</p>\n<p>As a software engineer on the Runtime team at Databricks, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>\n<p>Below are some example projects:</p>\n<ul>\n<li>Apache Spark: Develop the de facto open source standard framework for big data.</li>\n<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n<li>Delta Lake: A storage management system that combines the scale and cost-efficiency of data lakes, the performance and reliability of a data warehouse, and the low latency of streaming.</li>\n<li>Delta Pipelines: It&#39;s difficult to manage even a single data engineering pipeline. 
The goal of the Delta Pipelines project is to make it simple to orchestrate and operate tens of thousands of data pipelines.</li>\n<li>Performance Engineering: Build the next generation query optimizer and execution engine that&#39;s fast, tuning-free, scalable, and robust.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS in Computer Science, a related technical field, or equivalent practical experience.</li>\n<li>Optional: MS or PhD in databases or distributed systems.</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>\n<li>Driven by delivering customer value and impact.</li>\n<li>8+ years of production-level experience in Java, Scala, or C++.</li>\n<li>Strong foundation in algorithms and data structures and their real-world use cases.</li>\n<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop).</li>\n</ul>\n<p><strong>Pay Range Transparency</strong></p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. 
The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a78c8753-f89","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6544364002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$192,000-$260,000 USD","x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:03.334Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fae6667b-7e0"},"title":"Director of Engineering","description":"<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. As a Director of Engineering, you will work closely with leaders across the company, within engineering, as well as with product management, field engineering, recruiting, and HR. 
You will lead critical initiatives that enhance developer productivity and drive innovation in the developer platform space. You will be at the forefront of integrating AI tools into the developer workflow, shaping the future of AI-assisted development.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Solve real business needs at a large scale by applying your software engineering skills</li>\n<li>Ensure consistent delivery against milestones and strong alignment with the field, working &#39;two-in-a-box&#39; with product leadership</li>\n<li>Evolve the organisational structure to align with long-term initiatives, and build strong &#39;5 ingredient&#39; teams with good comms architecture</li>\n<li>Manage technical debt, including long-term technical architecture decisions, and balance the product roadmap</li>\n<li>Lead and participate in technical, product, and design discussions</li>\n<li>Build, manage, and operate a highly scalable service in the cloud</li>\n<li>Grow leaders on the team by providing coaching, mentorship, and growth opportunities</li>\n<li>Partner with other engineering and product leaders on planning, prioritisation, and staffing</li>\n<li>Create a culture of excellence on the team while leading with empathy</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>15+ years of industry experience building and supporting large-scale distributed systems</li>\n<li>Experience building, growing, and managing high-performance teams</li>\n<li>Ability to attract and hire engineers who meet the Databricks hiring principles</li>\n<li>Experience building and running cloud platforms</li>\n</ul>\n<p>Or: demonstrated ability to quickly learn new concepts in the SaaS space (e.g. 
technical background and fast learner)</p>\n<ul>\n<li>Experience working cross-functionally with product management and directly with customers; ability to deeply understand product and customer personas</li>\n<li>BS in Computer Science, or a Master&#39;s or PhD</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fae6667b-7e0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7896551002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software engineering","large-scale distributed systems","cloud platforms","technical architecture decisions","product roadmap","team management","leadership development","communication architecture"],"x-skills-preferred":["AI tools","developer workflow","technical debt management","scalable service operation","cloud computing","engineering leadership","product management","customer understanding"],"datePosted":"2026-04-18T15:54:02.837Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, large-scale distributed systems, cloud platforms, technical architecture decisions, product roadmap, team management, leadership development, communication architecture, AI tools, developer workflow, technical debt management, scalable service operation, cloud computing, engineering leadership, product management, customer understanding"}]}