{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/nosql-databases"},"x-facet":{"type":"skill","slug":"nosql-databases","display":"NoSQL Databases","count":35},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_867e3558-9a7"},"title":"Team Lead, Java Engineer - Equities Trading Technologies","description":"<p>We are seeking a Team Lead to maintain and enhance our mission-critical, multi-asset trading platform that is used firm-wide daily. This individual will own the existing Java Swing code base, while also playing a pivotal role in designing the next-generation HTML5 trading UI.</p>\n<p>The ideal candidate should have a proven track record in developing and maintaining Java-based front-end applications in the finance sector. Exceptional team collaboration skills and the ability to work effectively with colleagues across global time zones are crucial.</p>\n<p>Millennium strongly prioritizes our synergistic culture, which revolves around teamwork and low egos. 
You should possess the ability to work in a fast-paced environment both collaboratively and individually while managing multiple projects simultaneously.</p>\n<p>The successful individual will have a strong sense of urgency, emotional intelligence, and prioritize a high-caliber end-user experience.</p>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s degree in computer science or comparable</li>\n<li>7+ years of professional experience with Core Java and Java Swing, electronic trading systems and/or trader workstations environment strongly preferred.</li>\n<li>5+ years of experience working with HTML, JavaScript, CSS, and JQuery</li>\n<li>Deep understanding of multithreading and distributed systems within a high performance, latency-sensitive environment</li>\n<li>Strong knowledge of unit testing frameworks and continuous test-driven development practices</li>\n<li>Enterprise level experience with design patterns such as MVC, MV, MVP</li>\n<li>Enterprise level experience with RESTful web services</li>\n<li>Previous experience liaising with non-technology stakeholders, polished and proactive communication skills</li>\n</ul>\n<p>Beneficial/Ideal Technology Experience:</p>\n<ul>\n<li>EXT-JS, AngularJS, AJAX, JSON experience is very beneficial</li>\n<li>Knowledge of equities, futures, options and other asset classes is preferred</li>\n<li>Enterprise level experience with OMS architecture and design is preferred</li>\n<li>Experience with messaging middleware, Solace preferred</li>\n<li>Experience with relational and NoSQL databases. MongoDB preferred</li>\n<li>Experience working with financial data, including reference data, market data, order/execution and positions data.</li>\n<li>Experience working with Cloud: AWS (preferred), GCP or Azure</li>\n</ul>\n<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. 
The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_867e3558-9a7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955412056","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["Core Java","Java Swing","HTML","JavaScript","CSS","JQuery","Multithreading","Distributed systems","Unit testing frameworks","Continuous test-driven development practices","MVC","MV","MVP","RESTful web services"],"x-skills-preferred":["EXT-JS","AngularJS","AJAX","JSON","Equities","Futures","Options","OMS architecture and design","Messaging middleware","Solace","Relational databases","NoSQL databases","MongoDB","Financial data","Cloud","AWS","GCP","Azure"],"datePosted":"2026-04-18T22:13:00.318Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America · New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Core Java, Java Swing, HTML, JavaScript, CSS, JQuery, Multithreading, Distributed systems, Unit testing frameworks, Continuous test-driven development practices, MVC, MV, MVP, RESTful web services, EXT-JS, AngularJS, AJAX, JSON, Equities, Futures, Options, OMS architecture and design, Messaging middleware, Solace, Relational databases, NoSQL databases, MongoDB, Financial data, Cloud, AWS, GCP, 
Azure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7275ef33-009"},"title":"Staff Data Engineer","description":"<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows to connect operational systems, data for analytics and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize the code to ensure processes perform optimally, and lead work on database management.</p>\n<p>Communicating Between Technical and Non-Technical Colleagues</p>\n<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>\n<p>Data Analysis and Synthesis</p>\n<p>You will undertake data profiling and source system analysis, present clear insights to colleagues to support the end use of the data.</p>\n<p>Data Development Process</p>\n<p>You will design, build and test data products that are complex or large scale, build teams to complete data integration services.</p>\n<p>Data Innovation</p>\n<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>\n<p>Data Integration Design</p>\n<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>\n<p>Data Modeling</p>\n<p>You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognised data modelling patterns and standards, and when to 
apply them, compare and align different data models.</p>\n<p>Metadata Management</p>\n<p>You will design an appropriate metadata repository and present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, provide oversight and advice to more inexperienced members of the team.</p>\n<p>Problem Resolution</p>\n<p>You will respond to problems in databases, data processes, data products and services as they occur, initiate actions, monitor services and identify trends to resolve problems, determine the appropriate remedy and assist with its implementation, and with preventative measures.</p>\n<p>Programming and Build</p>\n<p>You will use agreed standards and tools to design, code, test, correct and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, collaborate with others to review specifications where appropriate.</p>\n<p>Technical Understanding</p>\n<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>\n<p>Testing</p>\n<p>You will review requirements and specifications, and define test conditions, identify issues and risks associated with work, analyse and report test activities and results.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7275ef33-009","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976928777","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$114,400 to $171,600","x-skills-required":["Proficiency in programming language such as Python or Java","Experience with Big Data technologies such as Hadoop, Spark, and Kafka","Familiarity with ETL 
processes and tools","Knowledge of SQL and NoSQL databases","Strong understanding of relational databases","Experience with data warehousing solutions","Proficiency with cloud platforms","Expertise in data modeling and design","Experience in designing and building scalable data pipelines","Experience with RESTful APIs and data integration"],"x-skills-preferred":["Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified)","Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field","Strong analytical and communication skills","Ability to work collaboratively in a team environment","High level of accuracy and attention to detail"],"datePosted":"2026-04-18T22:12:56.654Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to detail","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":171600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bdeae865-b3c"},"title":"Senior Software 
Engineer, Product","description":"<p>Join Brex, the intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets. As a Senior Software Engineer, Product, you will help develop new products from concept to launch, building customer-facing products that drive direct business impact and shape our long-term technical vision with a high-quality bar.</p>\n<p>You will be responsible for owning functionality and scalability features by taking responsibility from inception to deployment, upholding our high engineering standards and bringing consistency to the codebases, infrastructure, and processes you will encounter.</p>\n<p>We&#39;re looking for folks with an interest in building products and tools, and who are comfortable in dealing with lots of moving pieces. You&#39;ll have the opportunity to learn and push the frontier of providing the best financial software experience to help companies grow.</p>\n<p>This role will be based in our Vancouver Office, which is a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. 
Starting February 2, 2026, we will require 3 days per week in the office - Monday, Wednesday, and Thursday.</p>\n<p>As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own functionality and scalability features by taking responsibility from inception to deployment.</li>\n<li>Be responsible for full software development lifecycle - design, development, testing, operating in production.</li>\n<li>Uphold our high engineering standards and bring consistency to the codebases, infrastructure, and processes you will encounter.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>7+ years of professional experience in a software engineering role or equivalent experience.</li>\n<li>Experience building and designing scalable backend systems.</li>\n<li>Experience working with SQL or NoSQL databases.</li>\n<li>Experience working with backend programming languages (Java, Kotlin, Python).</li>\n<li>Familiarity with software engineering development cycles.</li>\n<li>Ability to hold yourself and the team to high standards.</li>\n<li>Strong communication and interpersonal skills.</li>\n<li>English proficiency/fluency, both written and speaking (note: interviews will be conducted in English).</li>\n</ul>\n<p>Bonus points:</p>\n<ul>\n<li>Experience collaborating with experts in product, design, and operations.</li>\n<li>Familiarity with functional programming languages.</li>\n<li>Experience driving initiatives at a broader level across an organization or company.</li>\n<li>Strong writing skills.</li>\n<li>Proactive approach.</li>\n</ul>\n<p>Compensation: The expected salary range for this role is $192,000 - $240,000 + equity. 
However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bdeae865-b3c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8398586002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$192,000 - $240,000 + equity","x-skills-required":["backend programming languages (Java, Kotlin, Python)","SQL or NoSQL databases","software engineering development cycles","high engineering standards","English proficiency/fluency"],"x-skills-preferred":["functional programming languages","collaborating with experts in product, design, and operations","driving initiatives at a broader level across an organization or company","strong writing skills","proactive approach"],"datePosted":"2026-04-18T15:58:42.816Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, British Columbia, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"backend programming languages (Java, Kotlin, Python), SQL or NoSQL databases, software engineering development cycles, high engineering standards, English proficiency/fluency, functional programming languages, collaborating with experts in product, design, and operations, driving initiatives at a broader level across an organization or company, strong writing skills, proactive 
approach","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_41c3b5ca-002"},"title":"Product Marketing Engineer Intern (AI & Automation) - Intern (Summer/Spring/Fall 2026)","description":"<p>At Cloudflare, we&#39;re looking for a Product Marketing Engineer Intern to join our team. As a Product Marketing Engineer Intern, you will develop AI-powered agents and web applications to automate and scale core marketing and sales functions. You will work closely with our sales and marketing teams to identify gaps and build, launch, and manage the internal lifecycle of your technical solutions.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Engineering GTM tools: Architect and deploy production-ready agents using the Cloudflare stack (Workers, Agents SDK, Sandbox SDK, Durable Objects etc..).</li>\n<li>AI enablement: Design autonomous workflows using Workers AI and Vectorize (RAG) to automate technical case studies, sales enablement, and event messaging.</li>\n<li>Rapid development: Utilize AI coding assistants (Claude Code, Windsurf, or OpenCode) to move from concept to a shipped internal tool in weeks.</li>\n<li>Technical ownership: Partner with sales/marketing to identify gaps, then build, launch, and manage the internal lifecycle of your technical solutions.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Currently pursuing a degree or program in CS or Engineering with a focus on building functional systems.</li>\n<li>Ability to commit to a minimum 12-week summer internship.</li>\n<li>In-office 3-5 days a week in Austin.</li>\n<li>Strong skills in TypeScript/JavaScript, React, and Tailwind CSS, with experience building/consuming RESTful APIs and working with SQL or NoSQL databases.</li>\n<li>Proficiency with Git/GitHub and hands-on experience 
with AI coding environments (Claude Code, Windsurf, or OpenCode).</li>\n</ul>\n<p>Nice-to-have skills include Cloudflare Native and Startup Experience.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_41c3b5ca-002","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7781953","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"internship","x-salary-range":null,"x-skills-required":["TypeScript","JavaScript","React","Tailwind CSS","RESTful APIs","SQL","NoSQL databases","Git","GitHub","AI coding environments"],"x-skills-preferred":["Cloudflare Native","Startup Experience"],"datePosted":"2026-04-18T15:57:27.552Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"In-Office"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"TypeScript, JavaScript, React, Tailwind CSS, RESTful APIs, SQL, NoSQL databases, Git, GitHub, AI coding environments, Cloudflare Native, Startup Experience"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_62606eee-53e"},"title":"Software Engineer II, Product","description":"<p>We are seeking a skilled Software Engineer II, Product to join our team in São Paulo. As a member of our Engineering team, you will be responsible for building innovative products and infrastructure for both internal and external users. You will design and build strong, resilient, and robust APIs, libraries, and tools to power Brex engineers and operations. You will own functionality and scalability features by taking responsibility from inception to deployment. 
You will be responsible for the full software development lifecycle: design, development, testing, and operating in production. You will uphold our high engineering standards and bring consistency to the codebases, infrastructure, and processes you will encounter.</p>\n<p>Requirements:</p>\n<ul>\n<li>3+ years of experience working as a software engineer</li>\n<li>Experience creating libraries and tools for engineers and operations</li>\n<li>Experience building and designing scalable backend systems</li>\n<li>Experience working with SQL or NoSQL databases</li>\n<li>Familiarity with software engineering development cycles</li>\n<li>Experience working with backend programming languages (Java, Kotlin, Python)</li>\n<li>Ability to hold yourself and the team to high standards</li>\n<li>Strong communication and interpersonal skills</li>\n<li>English proficiency/fluency, both written and speaking</li>\n</ul>\n<p>Bonus points:</p>\n<ul>\n<li>Experience collaborating with experts in product, design, and operations</li>\n<li>Familiarity with functional programming languages</li>\n<li>Strong writing skills</li>\n<li>Proactive approach</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_62606eee-53e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/7467124002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["APIs","libraries","tools","backend systems","SQL","NoSQL databases","software engineering development cycles","backend programming languages","Java","Kotlin","Python"],"x-skills-preferred":["functional programming languages","writing skills","proactive 
approach"],"datePosted":"2026-04-18T15:56:27.050Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"São Paulo, São Paulo, Brazil"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"APIs, libraries, tools, backend systems, SQL, NoSQL databases, software engineering development cycles, backend programming languages, Java, Kotlin, Python, functional programming languages, writing skills, proactive approach"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_65911f4d-382"},"title":"Software Engineer, Internship - 2026","description":"<p>Join us at Brex, the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets.</p>\n<p>We&#39;re looking for a talented Software Engineer to join our internship program. As a member of our team, you&#39;ll have the opportunity to build solutions that drive direct business impact and shape our long-term technical vision with a high-quality bar.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build products, services, and infrastructure that scale with speed and intention</li>\n<li>Work in partnership with your mentor and team to deliver high-quality solutions</li>\n<li>Learn and push the frontier of providing the best financial software experience to help companies grow</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Graduating from a BA/BS program in 2026</li>\n<li>Computer Science or Computer Engineering or Software Engineering degree</li>\n<li>Availability to participate in the program from April to December 2026</li>\n<li>Strong knowledge of CS fundamentals</li>\n<li>Strong communication, interpersonal skills, and eagerness to learn from others</li>\n<li>Experience working with backend programming languages (i.e. 
Java, Kotlin, Python)</li>\n<li>Experience working with SQL or NoSQL databases</li>\n</ul>\n<p>Bonus points:</p>\n<ul>\n<li>Experience collaborating with experts in product, design, and operations</li>\n<li>Familiarity with functional programming languages</li>\n<li>Strong writing skills</li>\n<li>Proactive approach</li>\n<li>Experience with corporate financial systems or concepts</li>\n<li>Experience with cloud computing systems &amp; services</li>\n</ul>\n<p>Where you&#39;ll work:</p>\n<p>This role will be remote.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_65911f4d-382","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8434385002","x-work-arrangement":"remote","x-experience-level":"entry","x-job-type":"internship","x-salary-range":null,"x-skills-required":["Java","Kotlin","Python","SQL","NoSQL databases"],"x-skills-preferred":["functional programming languages","cloud computing systems & services","corporate financial systems or concepts"],"datePosted":"2026-04-18T15:55:34.674Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"São Paulo, São Paulo, Brazil"}},"jobLocationType":"TELECOMMUTE","employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Finance","skills":"Java, Kotlin, Python, SQL, NoSQL databases, functional programming languages, cloud computing systems & services, corporate financial systems or concepts"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8317ba42-502"},"title":"Senior Technical Solutions Engineer (Platform)","description":"<p>We are seeking a highly skilled Frontline Senior Technical Solutions Engineer with over 7+ years of 
experience to join our Platform Support team.</p>\n<p>This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>\n<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>\n</ul>\n<ul>\n<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>\n</ul>\n<ul>\n<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>\n</ul>\n<ul>\n<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>\n</ul>\n<ul>\n<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>\n</ul>\n<ul>\n<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. 
Implement optimizations and enhancements to improve platform stability and efficiency.</li>\n</ul>\n<ul>\n<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>\n</ul>\n<ul>\n<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. Propose and drive initiatives to enhance platform capabilities and support processes.</li>\n</ul>\n<ul>\n<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Experience: Minimum of 7+ years of hands-on experience in a technical support or engineering role related to Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>\n</ul>\n<ul>\n<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines, distributed computing are required. 
Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimization.</li>\n</ul>\n<ul>\n<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>\n</ul>\n<ul>\n<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>\n</ul>\n<ul>\n<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>\n</ul>\n<ul>\n<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degree or relevant certifications are highly desirable.</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>\n</ul>\n<ul>\n<li>Familiarity with automation tools and CI/CD pipelines.</li>\n</ul>\n<ul>\n<li>Understanding of data governance and compliance requirements.</li>\n</ul>\n<p>Why Join Us?</p>\n<ul>\n<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>\n</ul>\n<ul>\n<li>Career Growth: Opportunities for professional development and career advancement.</li>\n</ul>\n<ul>\n<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>\n</ul>\n<p>PLEASE NOTE: THE ROLE INVOLVES WORKING IN THE EMEA TIMEZONE</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8317ba42-502","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8041698002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Databricks architecture","Apache Spark","AWS","Azure","GCP","Unix/Linux administration","DevOps practices","log analysis and monitoring tools"],"x-skills-preferred":["Hadoop","Kafka","NoSQL databases","automation tools","CI/CD pipelines","data governance and compliance requirements"],"datePosted":"2026-04-18T15:55:32.901Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0b9b8ad5-920"},"title":"Senior Software Engineer, Full Stack","description":"<p>Join us</p>\n<p>As a Senior Full-Stack Software Engineer on our Task Workflows Platform team, you will build and scale the foundational infrastructure that powers core experiences across Brex. 
You will work across our extensive suite of platforms - including our multi-channel notifications engine, collaborative commenting system, and dynamic workflow rule builder.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Work with engineers across the company to build new features and products end-to-end</li>\n<li>Own problems end-to-end, thinking through everything from user experience, data models, scalability, operability and ongoing metrics</li>\n<li>Provide technical leadership to the team by driving roadmap direction, architectural design and mentorship</li>\n<li>Work side-by-side with user-facing teams (Sales, Support) to best understand the needs of our customers</li>\n<li>Tune and polish features to a high degree of excellence</li>\n<li>Identify and implement reliability and performance improvements</li>\n<li>Uphold our high engineering standards and bring consistency to the codebases, infrastructure, and processes you will encounter</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>7+ years of professional experience designing, developing, and deploying full-stack products</li>\n<li>Strong proficiency in backend programming languages (Java, Kotlin, Python)</li>\n<li>Experience architecting, building, and maintaining scalable, high-availability distributed systems</li>\n<li>Experience designing and optimizing SQL and/or NoSQL databases, including data modeling, query performance tuning, and schema design</li>\n<li>Experience building and maintaining RESTful APIs and/or GraphQL services</li>\n<li>Experience with modern frontend technologies and tools like ES6+, React, TypeScript, and Webpack</li>\n</ul>\n<p>Bonus points</p>\n<ul>\n<li>Experience collaborating with cross-functional stakeholders and specialists in product, design, and operations</li>\n<li>Experience driving initiatives at a broader level across an organization or company</li>\n<li>Experience running in-product experiments, A/B testing</li>\n<li>Experience and interest in leveraging AI developer 
tools</li>\n</ul>\n<p>Compensation</p>\n<p>The expected salary range for this role is $192,000 - $240,000. However, the starting base pay will depend on a number of factors, including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0b9b8ad5-920","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8472634002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$192,000 - $240,000","x-skills-required":["backend programming languages (Java, Kotlin, Python)","architecting, building, and maintaining scalable, high-availability distributed systems","designing and optimizing SQL and/or NoSQL databases","building and maintaining RESTful APIs and/or GraphQL services","modern frontend technologies and tools like ES6+, React, TypeScript, and Webpack"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:19.892Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend programming languages (Java, Kotlin, Python), architecting, building, and maintaining scalable, high-availability distributed systems, designing and optimizing SQL and/or NoSQL databases, building and maintaining RESTful APIs and/or GraphQL services, modern frontend technologies and tools like ES6+, React, TypeScript, and 
Webpack","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_31f63648-f2e"},"title":"Software Engineer, Fullstack (Support Core)","description":"<p>We&#39;re seeking an experienced and ambitious Full Stack Software Engineer to join our team in Vancouver, Canada. As a key member of our Product Engineering department, you will take end-to-end ownership of significant features, driving architectural decisions, tackling complex technical hurdles, and ensuring high code quality. Your expertise will enrich the team, including through the active mentorship of junior engineers.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design, develop, and deploy high-quality features across Dialpad&#39;s web and desktop-native applications.</li>\n<li>Write clean, modular, and maintainable code using best practices along with unit &amp; integration tests.</li>\n<li>Participate in code reviews to ensure code quality, maintainability, and scalability.</li>\n<li>Ensure that features are shipped on time and with the highest quality.</li>\n<li>Participate in a rotating team on-call schedule to quickly diagnose and resolve critical issues, ensuring a seamless customer experience.</li>\n<li>Collaborate with cross-functional teams to build and use common components and practices across Dialpad products.</li>\n<li>Mentor junior engineers and help them grow their skills and expertise.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>5+ years of professional experience in Full-Stack Software Engineering.</li>\n<li>Strong experience with Python, APIs, Vue/React, HTML, CSS, JavaScript, TypeScript, GraphQL, GCP, or other cloud infrastructures, etc.</li>\n<li>Practical experience designing, deploying, and optimizing solutions leveraging serverless computing, microservices, 
and event-driven architectures.</li>\n<li>Proficiency with both SQL and NoSQL databases.</li>\n<li>Experience with building reusable and modular components for both frontend and backend.</li>\n<li>Experience with mentoring junior engineers and helping them grow their skills.</li>\n<li>Experience with RESTful APIs and GraphQL schemas.</li>\n<li>Bachelor&#39;s degree in Computer Science or equivalent practical experience.</li>\n<li>Proven ability to design, build, launch, and maintain at least two large-scale production systems.</li>\n<li>Experience with Agile development methodologies.</li>\n<li>Strong debugging and troubleshooting skills.</li>\n<li>Strong communication and collaboration skills.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_31f63648-f2e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dialpad","sameAs":"https://dialpad.com","logo":"https://logos.yubhub.co/dialpad.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dialpad/jobs/8407060002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$147,000-$174,000 CAD","x-skills-required":["Python","APIs","Vue/React","HTML","CSS","JavaScript","TypeScript","GraphQL","GCP","serverless computing","microservices","event-driven architectures","SQL","NoSQL databases","RESTful APIs","GraphQL schemas"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:15.798Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, APIs, Vue/React, HTML, CSS, JavaScript, TypeScript, GraphQL, GCP, serverless computing, microservices, event-driven architectures, SQL, NoSQL databases, RESTful APIs, GraphQL 
schemas","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":147000,"maxValue":174000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67470d3e-174"},"title":"Senior Software Engineer, Product","description":"<p>Join Brex, the intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets. As a Senior Software Engineer, Product, you will help develop new products from concept to launch, whether for a 5-person startup or a 5,000-person-strong enterprise. You&#39;ll work alongside user-facing teams like Sales and Support to deeply understand our customers&#39; needs and design and implement solutions to improve our customers&#39; banking experiences. You&#39;ll own problems end-to-end, from user experience and data models to scalability, operability, and ongoing metrics.</p>\n<p>You&#39;ll have the opportunity to learn and push the frontier of providing the best financial software experience to help companies grow. You&#39;ll be encouraged to be metric and data-driven and to think creatively to help Brex scale and get prepared for the new markets we are about to enter.</p>\n<p>This role will be based in our Seattle office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of three coordinated days in the office per week, Monday, Wednesday and Thursday. 
As a perk, we also have up to four weeks per year of fully remote work!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_67470d3e-174","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8461333002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$192,000-$240,000 + Equity","x-skills-required":["experience building and designing scalable backend systems","experience working with SQL or NoSQL databases","experience working with backend programming languages (Java, Kotlin, Python)","familiarity with software engineering development cycles","ability to hold yourself and the team to high standards"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:00.430Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"experience building and designing scalable backend systems, experience working with SQL or NoSQL databases, experience working with backend programming languages (Java, Kotlin, Python), familiarity with software engineering development cycles, ability to hold yourself and the team to high standards","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ccaf9b53-299"},"title":"Senior Software Engineer","description":"<p>We are seeking a Senior Full Stack Engineer to join our Chorus Engagement team. 
As a key member of our team, you will be responsible for designing, prototyping, and delivering customer-facing features across the full stack. This includes building and iterating on AI-powered tools that make our vast dataset of sales conversations more digestible and actionable for our users. You will also provide technical leadership, contribute to architectural decisions, and help raise the bar for engineers around you.</p>\n<p>In this role, you will work with engineers, designers, and Product Managers to break down roadmap initiatives into well-scoped, executable work. You will test, document, and maintain a codebase that is an example to our engineers and supports the scale of the business.</p>\n<p>To succeed in this role, you will need 5-10 years of relevant software engineering experience building production web applications. You should have strong proficiency in either backend (Python, Node.js) or frontend (React, Angular) development as your primary discipline, and working knowledge of the other side of the stack. 
You should also have hands-on experience with relational and NoSQL databases, familiarity with search technologies, and experience with RESTful API design and backend service development.</p>\n<p>Strong interpersonal and communication skills, as well as the ability to take end-to-end ownership, are essential for this role.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ccaf9b53-299","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8439687002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend development","frontend development","relational databases","NoSQL databases","search technologies","RESTful API design","backend service development"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:38.371Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend development, frontend development, relational databases, NoSQL databases, search technologies, RESTful API design, backend service development"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_adc2d7da-df2"},"title":"Software Engineer III - Python","description":"<p>We are looking for an experienced Software Engineer with a strong focus on backend development to join the Chorus Platform team.</p>\n<p>As a Software Engineer III - Python, you will design and develop complex, large-scale distributed systems that handle millions of customer requests daily. 
You will take ownership of customer-facing features and continuously deliver improvements that empower users with actionable insights from our platform.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Design and Develop: Build and deploy complex, large-scale distributed systems that handle millions of customer requests daily.</li>\n<li>Customer-Facing Innovation: Take ownership of customer-facing features and continuously deliver improvements that empower users with actionable insights from our platform.</li>\n<li>3rd Party Integrations: Develop and integrate with external conferencing and communication platforms using various SDKs and APIs.</li>\n<li>Collaboration: Work closely with cross-functional teams including product managers, data scientists, and front-end engineers to deliver a seamless user experience.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Backend Development: 5+ years of experience in software development, with expertise in Python (preferred), NodeJS, Java, or Scala.</li>\n<li>Distributed Systems: Proven experience in designing and developing distributed microservices and large-scale systems.</li>\n<li>API Development: Strong understanding and hands-on experience with RESTful API standards.</li>\n<li>Cloud Infrastructure: Experience working with cloud-based platforms, ensuring performance, scalability, and security of services.</li>\n<li>Database Management: Familiarity with both NoSQL and SQL databases, optimizing for performance and scalability.</li>\n<li>Communication &amp; Leadership: High interpersonal skills, with the ability to communicate technical ideas clearly and mentor junior engineers.</li>\n<li>Adaptability: A willingness to work with a variety of technologies and to take ownership of different parts of the product.</li>\n</ul>\n<p>Education: BSc in Computer Science, Mathematics, Software Engineering, or equivalent professional experience.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_adc2d7da-df2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8477031002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Backend Development","Distributed Systems","API Development","Cloud Infrastructure","Database Management"],"x-skills-preferred":["NodeJS","Java","Scala","RESTful API standards","NoSQL databases","SQL databases"],"datePosted":"2026-04-18T15:46:14.372Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, Karnataka, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Backend Development, Distributed Systems, API Development, Cloud Infrastructure, Database Management, NodeJS, Java, Scala, RESTful API standards, NoSQL databases, SQL databases"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_80b94e35-0f3"},"title":"Staff Technical Solutions Engineer (Platform)","description":"<p>We are seeking a highly skilled Frontline Staff Technical Solutions Engineer with 12+ years of experience to join our Platform Support team. 
This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>\n<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>\n<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>\n<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>\n<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>\n<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>\n<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. 
Implement optimizations and enhancements to improve platform stability and efficiency.</li>\n<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>\n<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. Propose and drive initiatives to enhance platform capabilities and support processes.</li>\n<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Experience: Minimum of 12 years of hands-on experience in a technical support or engineering role related to Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>\n<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines and distributed computing are required. 
Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimization.</li>\n<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>\n<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>\n<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>\n<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>\n<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degree or relevant certifications are highly desirable.</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>\n<li>Familiarity with automation tools and CI/CD pipelines.</li>\n<li>Understanding of data governance and compliance requirements.</li>\n</ul>\n<p>Why Join Us?</p>\n<ul>\n<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>\n<li>Career Growth: Opportunities for professional development and career advancement.</li>\n<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. 
Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_80b94e35-0f3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7845334002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Databricks architecture","Apache Spark","AWS","Azure","GCP","Unix/Linux administration","DevOps practices","log analysis and monitoring tools"],"x-skills-preferred":["Hadoop","Kafka","NoSQL databases","automation tools","CI/CD pipelines","data governance and compliance 
requirements"],"datePosted":"2026-04-18T15:45:36.598Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c6d7f1a0-882"},"title":"Resident Solutions Architect - Mumbai","description":"<p>We are seeking an experienced Resident Solution Architect (RSA) to join our Professional Services team and work directly with strategic customers on their data and AI transformation initiatives using the Databricks platform.</p>\n<p>As an RSA, you will serve as a trusted technical advisor and hands-on expert, guiding customers to solve complex big data challenges using the Databricks platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Collaborating with customers to understand their data and AI transformation goals and developing tailored solutions using the Databricks platform</li>\n<li>Designing and implementing scalable and secure data architectures using Apache Spark, Delta Lake, and other Databricks technologies</li>\n<li>Providing expert-level technical guidance and support to customers during the implementation process</li>\n<li>Identifying and addressing potential roadblocks and providing creative solutions to overcome them</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role</li>\n<li>4+ years of experience as a Solution Architect creating designs, solving Big Data challenges for 
customers</li>\n<li>Expertise in Apache Spark, distributed computing, and Databricks platform capabilities</li>\n<li>Comfortable writing code in Python, PySpark, and Scala</li>\n<li>Exceptional SQL, Spark SQL, Spark-streaming skills</li>\n<li>Advanced knowledge of Spark optimizations, Delta, Databricks Lakehouse Platforms</li>\n<li>Expertise in Azure</li>\n<li>Expertise in NoSQL databases (MongoDB, Redis, HBase)</li>\n<li>Expertise in data governance and security (Unity Catalog, RBAC)</li>\n<li>Ability to work with Partner Organization and deliver complex programs</li>\n<li>Ability to lead large technical delivery teams</li>\n<li>Understands the larger competitive landscape, such as EMR, Snowflake, and Sagemaker</li>\n<li>Experience of migration from On-prem / Cloud to Databricks is a plus</li>\n<li>Excellent communication and client-facing consulting skills, with the ability to simplify complex technical concepts</li>\n<li>Willingness to travel for onsite customer engagements within India</li>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<p>Good-to-have Skills:</p>\n<ul>\n<li>Experience with ML libraries/frameworks: Scikit-learn, TensorFlow, PyTorch</li>\n<li>Familiarity with MLOps tools and processes, including MLflow for tracking and deployment</li>\n<li>Experience delivering LLM and GenAI solutions at scale (RAG architectures, prompt engineering)</li>\n<li>Extensive experience on Hadoop, Trino, Ranger and other open-source technology stack</li>\n<li>Expertise on cloud platforms like AWS and GCP</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c6d7f1a0-882","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8107166002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Data Lakes","Python","PySpark","Scala","SQL","Spark SQL","Spark-streaming","Azure","NoSQL databases","data governance","security","Unity Catalog","RBAC"],"x-skills-preferred":["ML libraries/frameworks","MLOps tools and processes","LLM and GenAI solutions","Hadoop","Trino","Ranger","AWS","GCP"],"datePosted":"2026-04-18T15:45:04.317Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mumbai, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Data Lakes, Python, PySpark, Scala, SQL, Spark SQL, Spark-streaming, Azure, NoSQL databases, data governance, security, Unity Catalog, RBAC, ML libraries/frameworks, MLOps tools and processes, LLM and GenAI solutions, Hadoop, Trino, Ranger, AWS, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_456f029f-2e2"},"title":"Principal Software Engineer","description":"<p>As a Principal Software Engineer on our Go To Market Store (GTM Store) and ZoomInfo Data Platform (ZDP) team, you&#39;ll play a pivotal role in developing ZoomInfo&#39;s next-generation unified data platform.</p>\n<p>You&#39;ll architect and implement infrastructure that powers our GraphQL-based federated query system for seamless data access across platforms including BigTable, BigQuery, and Solr+.</p>\n<p>This is a unique opportunity to influence the technical direction of ZoomInfo&#39;s core data 
infrastructure, addressing complex challenges such as data freshness, multi-tenant isolation, and real-time data processing at scale.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and build scalable infrastructure for GTM Store and ZDP with sub-second query latency.</li>\n<li>Architect and implement metadata-driven GraphQL APIs for dynamic schema generation and query federation.</li>\n<li>Develop asynchronous secondary indexing systems for scaling capacity and reducing primary data store load.</li>\n<li>Design real-time analytics streaming data pipelines from BigTable to BigQuery.</li>\n<li>Develop data mutation and deletion frameworks supporting GDPR compliance and schema evolution.</li>\n<li>Implement CDC pipelines and calculated field processing for derived data views.</li>\n<li>Build observability and monitoring solutions for real-time issue diagnosis across distributed data systems.</li>\n<li>Create batch and streaming data processing workflows for complex relationships at scale.</li>\n<li>Collaborate with engineering leaders and product managers to define the technical roadmap.</li>\n<li>Mentor engineers and establish best practices for cloud-native data infrastructure development.</li>\n<li>Partner with cross-functional teams to address data platform requirements and challenges.</li>\n<li>Drive solutions for data freshness, query performance, and system reliability challenges.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Software Engineering, or related field (or equivalent experience).</li>\n<li>10+ years of software engineering experience building large-scale data platforms.</li>\n<li>Expertise with distributed NoSQL databases and data warehousing systems.</li>\n<li>Strong experience with Java 8+, Scala, Kotlin, GoLang for data systems development.</li>\n<li>Proven experience with GCP or AWS and cloud-native architectures.</li>\n<li>Experience with streaming/real-time data processing 
technologies.</li>\n<li>Strong system design skills for architecting multi-tenant, distributed systems.</li>\n<li>Hands-on experience with Google Cloud Platform services.</li>\n<li>Knowledge of CDC patterns, event sourcing, and streaming architectures.</li>\n<li>Experience solving data freshness and consistency challenges in distributed systems.</li>\n<li>Background in building observability and monitoring solutions for data platforms.</li>\n<li>Familiarity with metadata management and schema evolution.</li>\n<li>Experience with Kubernetes for deploying data services.</li>\n<li>SQL query optimization and performance tuning expertise.</li>\n<li>Experience building GraphQL APIs with federated or metadata-driven schema generation.</li>\n<li>Strong problem-solving skills and the ability to debug complex distributed systems issues.</li>\n<li>Excellent communication skills for explaining technical decisions to diverse audiences.</li>\n<li>Self-directed with the ability to drive initiatives independently while collaborating with teams.</li>\n<li>Passion for building reliable, observable, and maintainable systems.</li>\n<li>Experience promoting diverse, inclusive work environments.</li>\n</ul>\n<p>Actual compensation offered will be based on factors such as the candidate’s work location, qualifications, skills, experience and/or training. Your recruiter can share more information about the specific salary range for your desired work location during the hiring process.</p>\n<p>We want our employees and their families to thrive. In addition to comprehensive benefits we offer holistic mind, body and lifestyle programs designed for overall well-being. Learn more about ZoomInfo benefits here.</p>\n<p>Below is the US base salary for this position. 
Additional compensation such as Bonus, Commission, Equity and other benefits may also apply.</p>\n<p>$163,800-$257,400 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_456f029f-2e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8243004002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$163,800-$257,400 USD","x-skills-required":["Java 8+","Scala","Kotlin","GoLang","GCP","AWS","cloud-native architectures","streaming/real-time data processing technologies","distributed NoSQL databases","data warehousing systems","metadata management","schema evolution","Kubernetes","SQL query optimization","performance tuning","GraphQL APIs","federated or metadata-driven schema generation"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:17.604Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-US-CA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java 8+, Scala, Kotlin, GoLang, GCP, AWS, cloud-native architectures, streaming/real-time data processing technologies, distributed NoSQL databases, data warehousing systems, metadata management, schema evolution, Kubernetes, SQL query optimization, performance tuning, GraphQL APIs, federated or metadata-driven schema generation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":163800,"maxValue":257400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c418ca42-093"},"title":"Software Engineer 
(L3)","description":"<p>We&#39;re shaping the future of communications at Twilio, delivering innovative solutions to hundreds of thousands of businesses and empowering millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work and strong culture of connection and global inclusion means that no matter your location, you&#39;re part of a vibrant team with diverse experiences making a global impact each day.</p>\n<p>This position is needed to expand the cross-channel conversations team in Twilio. The team is working on building the next-generation omni-channel platform and data layer that bridges all Twilio channels, including Voice, Messaging, and Email, to empower businesses to engage with millions of customers at scale with highly personalized, data-driven interactions.</p>\n<p>As a Software Engineer (L3), you&#39;ll collaborate with other team members, including our Product team, to help influence, own, and improve our product offering. 
You&#39;ll design, build, and maintain massively scalable, highly reliable, and resilient Java and Go services hosted in AWS cloud working in a fast-growing and engineering-focused company.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Collaborate with other team members, including our Product team, to help influence, own, and improve our product offering</li>\n<li>Design, build, and maintain massively scalable, highly reliable, and resilient Java and Go services hosted in AWS cloud working in a fast-growing and engineering-focused company</li>\n<li>Design customer-focused and scalable software in a distributed system</li>\n<li>Work with other engineering teams and with members of the product organization to distill internal and customer needs into requirements and tech specs</li>\n<li>Build features without having all the answers; adopt new technologies and strategies where applicable</li>\n<li>Cultivate a culture of ownership and growth by mentoring engineers and sharing technical expertise</li>\n<li>Prevent, troubleshoot, and investigate production incidents by developing automated remediation processes</li>\n<li>Participate in Agile ceremonies for software development and contribute to the testing of our code and the delivery of what we create to production</li>\n<li>Use AI coding assistants like Claude and Codex to streamline and automate your development process</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>5+ years experience in designing, building, and operating high-scale, mission-critical cloud-based production systems</li>\n<li>Extensive experience with RESTful API design and development</li>\n<li>Bachelor&#39;s degree in Computer Science or related industry experience</li>\n<li>Familiarity with Java or Go, and applying best practice coding standards and writing high-quality code</li>\n<li>Excellent written and verbal communication skills</li>\n<li>Experience with AWS, GCP, or other public cloud providers</li>\n<li>Good understanding and hands-on experience 
working with SQL and NoSQL databases</li>\n</ul>\n<p>Desired:</p>\n<ul>\n<li>Prior experience building large-scale, high-throughput datastores on DynamoDB</li>\n<li>Experience building and deploying multi-regional systems</li>\n<li>Experience with container technology - Docker/Kubernetes</li>\n</ul>\n<p>What We Offer:</p>\n<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.</p>\n<p>Compensation:</p>\n<p>The estimated pay ranges for this role are as follows:</p>\n<ul>\n<li>Based in Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Vermont, or Washington D.C.: $138,300-$173,000</li>\n<li>Based in New York, New Jersey, Washington State, or California (outside of the San Francisco Bay area): $146,200-$183,600</li>\n<li>Based in the San Francisco Bay area, California: $163,100-$203,900</li>\n</ul>\n<p>This role may be eligible to participate in Twilio&#39;s equity plan and corporate bonus plan. 
All roles are generally eligible for the following benefits: health care insurance, 401(k) retirement account, paid sick time, paid personal time off, paid parental leave.</p>\n<p>The successful candidate&#39;s starting salary will be determined based on permissible, non-discriminatory factors such as skills, experience, and geographic location.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c418ca42-093","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7301401","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$138,300-$173,000","x-skills-required":["Java","Go","RESTful API design","AWS","GCP","SQL","NoSQL databases"],"x-skills-preferred":["DynamoDB","container technology - Docker/Kubernetes"],"datePosted":"2026-04-18T15:40:10.356Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Go, RESTful API design, AWS, GCP, SQL, NoSQL databases, DynamoDB, container technology - Docker/Kubernetes","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":138300,"maxValue":173000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4f79afe6-cd5"},"title":"Software Engineer - Platform","description":"<p>We believe that the way people interact with their finances will drastically improve in the next few years. 
We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>\n<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design &amp; Develop Scalable Systems: Build and maintain core platform services with a focus on performance, reliability, and scalability.</li>\n<li>Infrastructure &amp; Data Platforms: Develop and improve infrastructure for data storage and processing – for example, high-performance databases and modern data processing frameworks – to handle Plaid&#39;s growing data and product needs.</li>\n<li>Developer Productivity Tools: Create internal tools, frameworks, and automation to improve developer productivity and efficiency.</li>\n<li>Security &amp; Privacy by Design: Integrate security, privacy, and compliance best practices into our platforms (e.g. data encryption, access controls, audit logging) to protect sensitive financial data.</li>\n<li>Cross-Team Collaboration: Work hand-in-hand with product engineers and other stakeholders to understand requirements and translate them into reliable platform capabilities.</li>\n<li>Technical Excellence &amp; Leadership: Uphold high engineering standards through code reviews, testing, and documentation.</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<ul>\n<li>Experience: 2 to 4 years of software engineering experience, with a proven track record of building and shipping complex backend systems or platforms.</li>\n<li>Strong Coding Skills: Proficiency in at least one general-purpose programming language (e.g. 
Go, Python, Java, C++).</li>\n<li>Distributed Systems &amp; Problem Solving: Deep understanding of system design and algorithms.</li>\n<li>Data and Databases: Familiarity with relational and NoSQL database technologies (for example, MySQL/TiDB, PostgreSQL, MongoDB).</li>\n<li>Collaboration &amp; Communication: Excellent communication and teamwork skills, with the ability to work effectively in a cross-functional environment.</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4f79afe6-cd5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Plaid","sameAs":"https://plaid.com/","logo":"https://logos.yubhub.co/plaid.com.png"},"x-apply-url":"https://jobs.lever.co/plaid/2b9a141e-0669-4197-aa52-2b07d9fadc96","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$176,400-$243,600 per year","x-skills-required":["Go","Python","Java","C++","Distributed Systems","System Design","Algorithms","Relational Databases","NoSQL Databases"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:52:25.301Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Go, Python, Java, C++, Distributed Systems, System Design, Algorithms, Relational Databases, NoSQL 
Databases","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":176400,"maxValue":243600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd7fb909-289"},"title":"Web Crawling Engineer","description":"<p>About Mistral AI</p>\n<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>\n<p>We are looking for a skilled and motivated Web Crawling Engineer to join our dynamic engineering team. The ideal candidate should have a solid background in distributed web crawling, scraping and data extraction, with experience using advanced tools and technologies to collect and process large-scale data from diverse web sources at large scale.</p>\n<p>Responsibilities</p>\n<p>As a Web crawling engineer, you will be responsible for:</p>\n<ul>\n<li>Developing and maintaining web crawlers using Go to extract data from target websites.</li>\n<li>Utilizing headless browsing techniques, such as Chrome DevTools, to automate and optimize data collection processes.</li>\n<li>Collaborating with cross-functional teams to identify, scrape, and integrate data from APIs and web pages to support business objectives.</li>\n<li>Creating and implementing efficient parsing patterns using tokenizers, regular expressions, XPaths, and CSS selectors to ensure accurate data extraction.</li>\n<li>Designing and managing distributed job queues using technologies such as Redis, Aerospike and Kubernetes to handle large-scale distributed crawling and processing tasks.</li>\n<li>Developing strategies to monitor and ensure data quality, accuracy, and integrity throughout the crawling and indexing process.</li>\n<li>Continuously improving and optimizing existing web crawling infrastructure to maximize efficiency and adapt to new 
challenges.</li>\n</ul>\n<p>About You</p>\n<p>Core programming and web technologies</p>\n<ul>\n<li>Proficiency in Go (Golang)/Rust/Zig for building scalable and efficient web crawlers.</li>\n<li>Deep understanding of TCP, UDP, TLS and HTTP/1.1,2,3 protocols and web communication.</li>\n<li>Knowledge of HTML, CSS, and JavaScript for parsing and navigating web content.</li>\n<li>Familiarity with cloud platforms (AWS, GCP), orchestration (Kubernetes, Nomad), and containerization (Docker) for deployment.</li>\n</ul>\n<p>Data Structures &amp; Algorithms</p>\n<ul>\n<li>Mastery of queues, stacks, hash maps, and other data structures for efficient data handling.</li>\n<li>Ability to design and optimize algorithms for large-scale web crawling.</li>\n</ul>\n<p>Web Scraping &amp; Data Acquisition</p>\n<ul>\n<li>Hands-on experience with networking and web scraping libraries.</li>\n<li>Understanding of how search engines work and best practices for web crawling optimization.</li>\n</ul>\n<p>Databases &amp; Data Storage</p>\n<ul>\n<li>Experience with SQL and/or NoSQL databases (knowing Aerospike is a bonus) for storing and managing crawled data.</li>\n<li>Familiarity with data warehousing and scalable storage solutions.</li>\n</ul>\n<p>Distributed Systems &amp; Big Data</p>\n<ul>\n<li>Knowledge of distributed systems (e.g., Hadoop, Spark) for processing large datasets.</li>\n</ul>\n<p>Bonus Skills (Nice-to-Have)</p>\n<ul>\n<li>Experience with web archiving projects &amp; tooling, open-source archiving is a big plus!</li>\n<li>Experience applying Machine Learning to improve crawling efficiency or accuracy.</li>\n<li>Experience with low-level networking programming and/or userspace TCP/IP stacks.</li>\n</ul>\n<p>Hiring Process</p>\n<p>Here is what you should expect:</p>\n<ul>\n<li>Introduction call - 35 min</li>\n<li>Hiring Manager Interview - 30 min</li>\n<li>Live-coding Interview - 45 min</li>\n<li>System Design Interview - 45 min</li>\n<li>Deep dive interview (optional) - 
60min</li>\n<li>Culture-fit discussion - 30 min</li>\n<li>Reference checks</li>\n</ul>\n<p>Additional Information</p>\n<p>Location &amp; Remote</p>\n<p>This role is primarily based in one of our European offices , Paris, France and London, UK. We will prioritize candidates who either reside there or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team. In certain specific situations, we will also consider remote candidates based in one of the countries listed in this job posting , currently France, UK, Germany, Belgium, Netherlands, Spain and Italy. In any case, we ask all new hires to visit our Paris HQ office:</p>\n<ul>\n<li>for the first week of their onboarding (accommodation and travelling covered)</li>\n<li>then at least 2 days per month</li>\n</ul>\n<p>What we offer</p>\n<p>💰 Competitive salary and equity</p>\n<p>🧑‍⚕️ Health insurance</p>\n<p>🚴 Transportation allowance</p>\n<p>🥎 Sport allowance</p>\n<p>🥕 Meal vouchers</p>\n<p>💰 Private pension plan</p>\n<p>🍼 Parental : Generous parental leave policy</p>\n<p>🌎 Visa sponsorship</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dd7fb909-289","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/c96bf665-7d73-406b-8d8f-ddf8df5d160f","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go","Rust","Zig","TCP","UDP","TLS","HTTP/1.1","HTTP/2","HTTP/3","HTML","CSS","JavaScript","cloud platforms","orchestration","containerization","queues","stacks","hash maps","SQL","NoSQL databases","data warehousing","scalable storage solutions","distributed 
systems","Hadoop","Spark"],"x-skills-preferred":["web archiving projects","Machine Learning","low-level networking programming","userspace TCP/IP stacks"],"datePosted":"2026-04-17T12:48:06.790Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Rust, Zig, TCP, UDP, TLS, HTTP/1.1, HTTP/2, HTTP/3, HTML, CSS, JavaScript, cloud platforms, orchestration, containerization, queues, stacks, hash maps, SQL, NoSQL databases, data warehousing, scalable storage solutions, distributed systems, Hadoop, Spark, web archiving projects, Machine Learning, low-level networking programming, userspace TCP/IP stacks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd034e01-768"},"title":"Senior Software Engineer, Backend (AI Agent)","description":"<p>Join us on this thrilling journey to revolutionize the workforce with AI.\nThe future of work is here, and it&#39;s at Cresta.</p>\n<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. 
This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p>This is a unique opportunity to shape the future of AI at Cresta by solving complex problems and bringing breakthrough AI advancements into production environments.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta&#39;s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams including frontend engineers, machine learning engineers to ensure seamless integration of AI Agents into Cresta&#39;s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or a related field.</li>\n<li>5+ years of experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Proficient in designing and maintaining clear and robust APIs with a strong understanding of protocols including gRPC, REST.</li>\n<li>Previous experience working with Virtual Agent or AI Agent systems.</li>\n<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance 
standards.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family.</li>\n<li>Flexible PTO to take the time you need, when you need it.</li>\n<li>Paid parental leave for all new parents welcoming a new child.</li>\n<li>Retirement savings plan to help you plan for the future.</li>\n<li>Remote work setup budget to help you create a productive home office.</li>\n<li>Monthly wellness and communication stipend to keep you connected and balanced.</li>\n<li>In-office meal program and commuter benefits provided for onsite employees.</li>\n</ul>\n<p>Compensation at Cresta:</p>\n<ul>\n<li>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table.</li>\n<li>The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. 
In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</li>\n</ul>\n<p>Salary Range: $205,000–$270,000 + Offers Equity</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dd034e01-768","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5133464008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,000–$270,000 + Offers Equity","x-skills-required":["backend system architecture","cloud services","gRPC","REST","Virtual Agent","AI Agent systems","high-performance database schema design","query optimization","SQL","NoSQL databases","containerized application deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:27:37.299Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance 
standards","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_52ba7bfb-60e"},"title":"Senior Software Engineer, Backend (AI Agent Quality)","description":"<p>Join us on a mission to revolutionize the workforce with AI.</p>\n<p>At Cresta, the AI Agent team is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams including frontend engineers, machine learning engineers to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Bachelor’s degree in Computer Science or a related 
field.</li>\n<li>5+ years of experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Proficient in designing and maintaining clear and robust APIs with a strong understanding of protocols including gRPC, REST.</li>\n<li>Previous experience working with Virtual Agent or AI Agent systems.</li>\n<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family’s needs.</li>\n<li>Paid parental leave to support you and your family.</li>\n<li>Monthly Health &amp; Wellness allowance.</li>\n<li>Work from home office stipend to help you succeed in a remote environment.</li>\n<li>Lunch reimbursement for in-office employees.</li>\n<li>PTO: 3 weeks in Canada.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_52ba7bfb-60e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4062453008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend system architecture","cloud services","APIs","gRPC","REST","Virtual Agent","AI Agent systems","high-performance database schema design","query optimization","SQL","NoSQL databases","containerized application 
deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:25:52.823Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, APIs, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c3c253ad-38b"},"title":"Software Engineer, Backend (AI Agent)","description":"<p>Join us on this thrilling journey to revolutionize the workforce with AI. The AI Agent team at Cresta is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p><strong>About the Role:</strong> As a Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. 
This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams including frontend engineers, machine learning engineers to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p><strong>Qualifications We Value:</strong></p>\n<ul>\n<li>Bachelor’s degree in Computer Science or a related field.</li>\n<li>2+ years of experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Knowledge in designing and maintaining clear and robust APIs with a strong understanding of protocols including gRPC, REST.</li>\n<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>\n<li>Bonus: experience working with Virtual Agent or AI Agent systems.</li>\n</ul>\n<p><strong>Perks &amp; Benefits:</strong></p>\n<ul>\n<li>We offer Cresta employees a variety of medical, dental, and vision plans, 
designed to fit you and your family’s needs.</li>\n<li>Paid parental leave to support you and your family.</li>\n<li>Monthly Health &amp; Wellness allowance.</li>\n<li>Work from home office stipend to help you succeed in a remote environment.</li>\n<li>Lunch reimbursement for in-office employees.</li>\n<li>PTO: 3 weeks in Canada.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c3c253ad-38b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4325729008","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend system architecture","cloud services","APIs","gRPC","REST","database schema design","query optimization","SQL","NoSQL databases","containerized application deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:25:22.648Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, APIs, gRPC, REST, database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_51fb35f8-ae2"},"title":"Data Engineer","description":"<p>We are seeking 
an experienced Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining large-scale data systems and pipelines. You will work closely with cross-functional teams to ensure seamless integration with existing systems and to drive business growth through data-driven insights.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop scalable data architectures using cloud-based technologies such as AWS and Azure</li>\n<li>Develop and maintain ETL processes to extract, transform, and load data from various sources</li>\n<li>Collaborate with data scientists to develop and deploy machine learning models</li>\n<li>Ensure data quality, security, and compliance with regulatory requirements</li>\n<li>Work with stakeholders to identify business needs and develop data solutions</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, or related field</li>\n<li>3+ years of experience in data engineering or a related field</li>\n<li>Strong understanding of data architecture, design patterns, and best practices</li>\n<li>Experience with cloud-based technologies such as AWS and Azure</li>\n<li>Proficiency in programming languages such as Python, Java, or C++</li>\n<li>Excellent problem-solving skills and attention to detail</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s degree in Computer Science, Engineering, or related field</li>\n<li>Experience with big data technologies such as Hadoop, Spark, or NoSQL databases</li>\n<li>Familiarity with data visualization tools such as Tableau, Power BI, or D3.js</li>\n<li>Certification in data engineering or a related field</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a leading technology business</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Professional development opportunities</li>\n</ul>
","url":"https://yubhub.co/jobs/job_51fb35f8-ae2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Williams Advanced Engineering","sameAs":"https://www.williamsadvancedengineering.com/","logo":"https://logos.yubhub.co/williamsadvancedengineering.com.png"},"x-apply-url":"https://careers.williamsf1.com/job/trackside-operations-lead-hospitality-in-london-jid-494","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","Azure","Python","Java","C++","ETL","data architecture","data design patterns","data quality","data security","regulatory compliance"],"x-skills-preferred":["Hadoop","Spark","NoSQL databases","Tableau","Power BI","D3.js"],"datePosted":"2026-03-12T12:01:28.538Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Grove"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS, Azure, Python, Java, C++, ETL, data architecture, data design patterns, data quality, data security, regulatory compliance, Hadoop, Spark, NoSQL databases, Tableau, Power BI, D3.js"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3437e4dc-7d6"},"title":"Backend Engineer (Kotlin) - Shenzhen, China (Senior)","description":"<p><strong>Job Overview</strong></p>\n<p>We are seeking a senior backend engineer to join our team in Shenzhen, China. 
As a senior backend engineer, you will be responsible for designing, developing, and optimizing our core backend systems, including payment gateways, settlement systems, mobile POS integrations, and financial service APIs.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, develop, and optimize high-performance backend services and APIs (REST/gRPC) using Kotlin or Java</li>\n<li>Participate in backend architecture design to ensure system high availability and scalability</li>\n<li>Troubleshoot and optimize system performance issues to ensure system stability</li>\n<li>Collaborate closely with frontend, product, and testing teams to drive project delivery</li>\n<li>Write high-quality, maintainable code and participate in code reviews</li>\n<li>Stay up-to-date with the latest trends in backend and cloud technologies</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years of experience in backend development (Kotlin or Java)</li>\n<li>Familiarity with microservices architecture, cloud deployment (AWS/GCP), and CI/CD pipelines (GitHub Actions)</li>\n<li>Proficiency in SQL/NoSQL databases (PostgreSQL, MySQL, Redis)</li>\n<li>Familiarity with Docker, Kubernetes, and message queue systems (Kafka)</li>\n<li>Familiarity with API design (REST/gRPC) and version control (GitHub)</li>\n<li>Familiarity with agile development processes and good team collaboration skills</li>\n<li>Good English communication skills (Mandarin or Cantonese is a plus)</li>\n<li>Experience in financial technology, payment, or settlement systems is a plus</li>\n</ul>\n<p><strong>Preferred Skills</strong></p>\n<ul>\n<li>High concurrency and large-scale system development experience</li>\n<li>Understanding of DevOps, CI/CD pipelines, and ability to drive automation deployment and operations</li>\n<li>Experience in distributed systems or data architecture design</li>\n<li>Contributions to open-source communities or personal technical blogs</li>\n</ul>\n<p><strong>Why Join 
Kody?</strong></p>\n<ul>\n<li>Global technology company with offices in Singapore, London, and Hong Kong</li>\n<li>Flexible work arrangements in Shenzhen and Hong Kong</li>\n<li>Technology-driven culture with engineers having core influence on product decisions</li>\n<li>Challenging projects, including large-scale backend architecture design and optimization</li>\n<li>Competitive salary and benefits to reward your technical contributions</li>\n</ul>","url":"https://yubhub.co/jobs/job_3437e4dc-7d6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Kody","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/nt225i4DCrp5uzosmH4FXU/%E5%90%8E%E7%AB%AF%E5%B7%A5%E7%A8%8B%E5%B8%88-(kotlin)---%E6%B7%B1%E5%9C%B3%2C-%E4%B8%AD%E5%9B%BD-(senior)-in-shenzhen-at-kody","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Kotlin","Java","microservices architecture","cloud deployment","CI/CD pipelines","SQL/NoSQL databases","Docker","Kubernetes","message queue systems","API design","version control","agile development processes"],"x-skills-preferred":["high concurrency and large-scale system development experience","DevOps","CI/CD pipelines","distributed systems or data architecture design","open-source communities","personal technical blogs"],"datePosted":"2026-03-09T17:01:45.087Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Shenzhen, China"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Kotlin, Java, microservices architecture, cloud deployment, CI/CD pipelines, SQL/NoSQL databases, Docker, Kubernetes, message queue systems, API design, version control, agile development processes, high 
concurrency and large-scale system development experience, DevOps, CI/CD pipelines, distributed systems or data architecture design, open-source communities, personal technical blogs"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2f98eac1-9e4"},"title":"Backend Kotlin Developer (Senior)","description":"<p><strong>Job Description</strong></p>\n<p>At Kody, we&#39;re redefining how businesses take payments and access financial services. As a fast-growing Fintech, we partner with some of the most recognised names in hospitality, F&amp;B, and retail across Hong Kong, helping them modernise payments, settlement, and digital financial experiences.</p>\n<p>Our Hong Kong tech team runs like a startup within a scaling company, i.e. agile, innovative, and deeply product-driven. We&#39;re now looking for a Senior Backend Engineer to help us build the next generation of payment and settlement infrastructure.</p>\n<p><strong>Why Join Us?</strong></p>\n<ul>\n<li>Work with industry-leading brands in Hong Kong and across the region</li>\n<li>Join a fast-moving, innovative tech culture with the stability of a growing international company</li>\n<li>Build with modern tools and stacks: Kotlin, Kafka, Kubernetes, Redis, PostgreSQL, and cloud-native frameworks</li>\n<li>Enjoy flexibility while working with a passionate, highly skilled engineering team</li>\n<li>Access learning and development and the opportunity to travel for collaboration with our global tech teams</li>\n</ul>\n<p><strong>Your Role</strong></p>\n<p>As a Senior Backend Engineer, you&#39;ll be at the heart of our platform, designing and scaling systems that power Kody&#39;s products, from payment gateways and settlement systems to mobile POS integrations and financial service APIs.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, and maintain backend services and APIs (REST/gRPC)</li>\n<li>Contribute to backend architecture 
and ensure system reliability and scalability</li>\n<li>Debug, troubleshoot, and optimise performance in production systems</li>\n<li>Collaborate with product, QA, and frontend teams to deliver robust, maintainable solutions</li>\n<li>Write clean, efficient, and well-tested code</li>\n<li>Stay current with modern backend and cloud technologies</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years of experience in backend software development (Kotlin or Java)</li>\n<li>Strong understanding of microservices, cloud deployment (AWS/GCP), and CI/CD pipelines (GitHub Actions)</li>\n<li>Proficiency in SQL/NoSQL databases (PostgreSQL, MySQL, Redis)</li>\n<li>Familiar with Docker, Kubernetes, and event-driven systems (Kafka)</li>\n<li>Solid understanding of APIs (REST, gRPC) and version control (GitHub)</li>\n<li>Familiarity with agile development processes</li>\n<li>Fluent in English (Mandarin or Cantonese a plus)</li>\n<li>Fintech, payment, or settlement background is a plus</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive compensation and meaningful work impact</li>\n<li>Equity available</li>\n<li>Flexibility and a global, tech-driven culture</li>\n<li>Career growth opportunities within a fast-scaling fintech</li>\n<li>Work with ambitious teammates who think big and execute fast</li>\n</ul>
","url":"https://yubhub.co/jobs/job_2f98eac1-9e4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Kody","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/qAZLgjp7A7TbnzrkimKdmy/backend-kotlin-developer-(senior)-in-admiralty-at-kody","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Kotlin","Kafka","Kubernetes","Redis","PostgreSQL","cloud-native frameworks","microservices","cloud deployment","CI/CD pipelines","SQL/NoSQL databases","Docker","event-driven systems"],"x-skills-preferred":[],"datePosted":"2026-03-09T17:00:45.220Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Admiralty, Kowloon, Hong Kong"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Kotlin, Kafka, Kubernetes, Redis, PostgreSQL, cloud-native frameworks, microservices, cloud deployment, CI/CD pipelines, SQL/NoSQL databases, Docker, event-driven systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_055a769a-c68"},"title":"Backend Engineer (Kotlin) - Shenzhen, China (Senior)","description":"<p><strong>Job Overview</strong></p>\n<p>As a Senior Backend Engineer at Kody, you will be responsible for designing, developing, and optimizing our core backend systems, including payment gateways, settlement systems, mobile POS integrations, and financial service APIs.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, develop, and optimize high-performance backend services and APIs (REST/gRPC) using Kotlin or Java</li>\n<li>Participate in backend architecture design to ensure system high availability and scalability</li>\n<li>Troubleshoot and optimize system performance issues to ensure system stability</li>\n<li>Collaborate closely with 
frontend, product, and testing teams to drive project delivery</li>\n<li>Write high-quality, maintainable code and participate in code reviews</li>\n<li>Stay up-to-date with the latest developments in backend and cloud technologies</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years of experience in backend development (Kotlin or Java)</li>\n<li>Familiarity with microservices architecture, cloud deployment (AWS/GCP), and CI/CD pipelines (GitHub Actions)</li>\n<li>Familiarity with SQL/NoSQL databases (PostgreSQL, MySQL, Redis)</li>\n<li>Familiarity with Docker, Kubernetes, and message queue systems (Kafka)</li>\n<li>Familiarity with API design (REST/gRPC) and version control (GitHub)</li>\n<li>Familiarity with agile development processes and good team collaboration skills</li>\n<li>Good English communication skills (Mandarin or Cantonese is a plus)</li>\n<li>Experience in financial technology, payment, or settlement systems is a plus</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience in high-concurrency, large-scale system development</li>\n<li>Understanding of DevOps, CI/CD pipelines, and ability to drive automation deployment and operations</li>\n<li>Experience in distributed systems or data architecture design</li>\n<li>Contributions to open-source communities or personal technical blogs</li>\n</ul>\n<p><strong>Why Join Kody?</strong></p>\n<ul>\n<li>Global technology company with offices in Singapore, London, and Hong Kong</li>\n<li>Flexible work arrangements in Shenzhen and Hong Kong</li>\n<li>Technology-driven culture with engineers having core influence on product decisions</li>\n<li>Challenging projects, including large-scale backend architecture design and optimization</li>\n<li>Competitive salary and benefits to reward your technical contributions</li>\n</ul>
","url":"https://yubhub.co/jobs/job_055a769a-c68","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Kody","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/2mZgGN3RMgykf9oavxvD49/%E5%90%8E%E7%AB%AF%E5%B7%A5%E7%A8%8B%E5%B8%88-(kotlin)---%E6%B7%B1%E5%9C%B3%2C-%E4%B8%AD%E5%9B%BD-(senior)-in-shenzhen-at-kody","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Kotlin","Java","microservices architecture","cloud deployment","CI/CD pipelines","SQL/NoSQL databases","Docker","Kubernetes","message queue systems","API design","version control","agile development processes"],"x-skills-preferred":["high-concurrency","large-scale system development","DevOps","CI/CD pipelines","distributed systems","data architecture design","open-source communities","personal technical blogs"],"datePosted":"2026-03-09T16:58:56.332Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Shenzhen, China"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Kotlin, Java, microservices architecture, cloud deployment, CI/CD pipelines, SQL/NoSQL databases, Docker, Kubernetes, message queue systems, API design, version control, agile development processes, high-concurrency, large-scale system development, DevOps, CI/CD pipelines, distributed systems, data architecture design, open-source communities, personal technical blogs"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8b0e9386-fa9"},"title":"Data Engineering & Data Science Consultant","description":"<p><strong>Data Engineering &amp; Data Science Consultant</strong></p>\n<p>You will work hands-on on the design, build, and operationalisation of modern data and analytics solutions. 
You will contribute across the full lifecycle – from data ingestion and transformation to analytics, machine learning, and production deployment. You will collaborate closely with data engineers, architects, data scientists, and business stakeholders to deliver scalable, reliable, and value-driven data solutions in complex client environments.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Apply data science and machine learning techniques to real-world business problems</li>\n<li>Work with structured and semi-structured data in data lakes, lakehouses, and data warehouses</li>\n<li>Develop and optimise data transformations for analytical and machine learning workloads</li>\n<li>Support the productionisation of data and ML solutions, including monitoring and optimisation</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3–5 years of experience in data engineering, data science, or analytics</li>\n<li>Hands-on experience delivering data and analytics solutions in project-based or client environments</li>\n<li>Strong problem-solving skills and a pragmatic, delivery-oriented mindset</li>\n</ul>\n<p><strong>Data Engineering Foundations</strong></p>\n<ul>\n<li>Experience building end-to-end data pipelines (ingestion, transformation, storage)</li>\n<li>Solid understanding of data modelling, data transformations, and feature engineering</li>\n<li>Familiarity with cloud-based data platforms, such as Azure, AWS, or GCP</li>\n</ul>\n<p><strong>Applied Data Science &amp; Analytics</strong></p>\n<ul>\n<li>Experience applying statistical analysis and machine learning techniques</li>\n<li>Strong programming skills in Python</li>\n<li>Very good SQL skills and experience working with relational databases</li>\n</ul>\n<p><strong>Nice to have</strong></p>\n<ul>\n<li>Experience with streaming technologies (e.g. Kafka, Azure Event Hubs)</li>\n<li>Exposure to GenAI, NLP, time series, or advanced analytics use cases</li>\n<li>Experience with NoSQL databases (e.g. 
MongoDB, Cosmos DB)</li>\n</ul>\n<p><strong>Language &amp; Mobility</strong></p>\n<ul>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, Analytics &amp; Data Science.</p>","url":"https://yubhub.co/jobs/job_8b0e9386-fa9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/43f8dm12rcrpZUsa228TbZ/data-engineering-%26-data-science-consultant-in-london-at-infosys-consulting---europe","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data science","machine learning","data engineering","cloud-based data platforms","data modelling","data transformations","feature engineering","Python","SQL","relational databases"],"x-skills-preferred":["streaming technologies","GenAI","NLP","time series","advanced analytics","NoSQL databases"],"datePosted":"2026-03-09T16:58:07.007Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data science, machine learning, data engineering, cloud-based data platforms, data modelling, data transformations, feature engineering, Python, SQL, relational databases, 
streaming technologies, GenAI, NLP, time series, advanced analytics, NoSQL databases"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aafa7b92-fa6"},"title":"Senior Consultant - Data Engineering & Data Science (m/w/d)","description":"<p>Are you looking to advance your career and work with experienced, talented colleagues to successfully solve the most important challenges of our clients? We are growing further and looking for enthusiastic individuals to strengthen our team. You will be part of a dynamic, strongly growing company with over 300,000 employees.</p>\n<p>Our dynamic organisation allows you to work across topics and bring in your ideas, experiences, creativity, and goal orientation. Are you ready?</p>\n<p>As a Consultant/Senior Consultant in the Data Engineering &amp; Data Science field, you will work hands-on on the conception, development, and implementation of modern data and analytics solutions. You will support the entire project lifecycle - from data intake and transformation to analytics and machine learning to productive operation.</p>\n<p>You will work closely with data engineers, architects, data scientists, and subject matter experts to implement scalable, reliable, and value-adding solutions in complex customer environments.</p>\n<p><strong>Your Tasks</strong></p>\n<ul>\n<li>Apply data science methods (machine learning, deep learning, GenAI) to solve concrete business questions</li>\n<li>Work with structured and semi-structured data in data lakes, lakehouses, and data warehouses</li>\n<li>Set up data pipelines for analytical workloads</li>\n<li>Support the productive implementation of data and ML solutions, including monitoring and optimisation</li>\n</ul>\n<p><strong>What You Bring - Required</strong></p>\n<ul>\n<li>At least 3 years of relevant professional experience in the field of data engineering, data science, or analytics</li>\n<li>Hands-on experience in 
implementing data and analytics solutions in (customer) projects</li>\n<li>Strong problem-solving skills and a pragmatic, implementation-oriented way of working</li>\n</ul>\n<p><strong>Data Engineering Fundamentals</strong></p>\n<ul>\n<li>Experience in setting up data pipelines (ingestion, transformation, storage)</li>\n<li>Solid understanding of data modeling, data transformations, and feature engineering</li>\n<li>Experience with cloud-based data platforms, such as:</li>\n</ul>\n<ol>\n<li>Azure, AWS, or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse/Microsoft Fabric</li>\n</ol>\n<ul>\n<li>Knowledge of CI/CD concepts and production-ready deployments</li>\n</ul>\n<p><strong>Applied Data Science &amp; Analytics</strong></p>\n<ul>\n<li>Experience in applying GenAI, deep learning, and machine learning procedures as well as statistical analyses</li>\n<li>Very good programming skills in Python</li>\n<li>Very good SQL skills and experience with relational databases</li>\n<li>Experience in deploying and productively using ML models</li>\n<li>Ability to translate analytical results into business-relevant insights</li>\n<li>Bachelor&#39;s or master&#39;s degree in computer science, engineering, mathematics, or a related field, or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with:</li>\n</ul>\n<ol>\n<li>Streaming technologies (e.g. Kafka, Azure Event Hubs)</li>\n<li>Time series analysis, NLP applications, or system modeling</li>\n<li>NoSQL databases (e.g. 
MongoDB, Cosmos DB)</li>\n<li>Docker and Kubernetes</li>\n<li>Data visualization tools like Power BI, Tableau</li>\n<li>Cloud or architecture certifications</li>\n</ol>\n<p><strong>Language &amp; Mobility (Germany)</strong></p>\n<ul>\n<li>Fluent German skills (at least C1) for customer communication in the German-speaking market</li>\n<li>Very good English skills</li>\n<li>Project-related travel readiness</li>\n</ul>\n<p><strong>Your Team</strong></p>\n<p>You will become part of our growing Data &amp; Analytics teams. In this area, you will work with modern technologies in modern data ecosystems. You have the opportunity to turn your own ideas into results - in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>You will become an employee of a globally renowned management consulting firm at the forefront of technological innovation and industrial transformation. We work across industries with leading companies. Our culture is inclusive and entrepreneurial. As a mid-sized consulting firm embedded in the size of Infosys, we can support our customers worldwide and throughout the entire transformation process in a partnership-like manner.</p>\n<p>Our values IC-LIFE - Inclusion, Equity &amp; Diversity, Client, Leadership, Integrity, Fairness, and Excellence - form our compass of values. Further information can be found on our career website.</p>\n<p>In Europe, we are awarded by the Financial Times and Forbes as one of the leading consulting firms. Infosys is ranked among the top employers in Germany 2023 and has been certified by the Top Employers Institute for outstanding working conditions in Europe for five consecutive years.</p>\n<p>We offer a market-leading salary, attractive additional benefits, and excellent opportunities for further education and development. Have you become curious? 
Then we look forward to your application - apply now!</p>","url":"https://yubhub.co/jobs/job_aafa7b92-fa6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/ecAfMkjFkA97qaoimVMGNF/hybrid-(senior)-consultant---data-engineering-%26-data-science-(m%2Fw%2Fd)--deutschlandweit-in-munich-at-infosys-consulting---europe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Science","Machine Learning","Deep Learning","GenAI","Data Engineering","Data Warehousing","Data Lakes","Lakehouses","Data Pipelines","Cloud-based Data Platforms","Azure","AWS","GCP","Databricks","Snowflake","BigQuery","Azure Synapse","Microsoft Fabric","CI/CD","Python","SQL","Relational Databases"],"x-skills-preferred":["Streaming Technologies","Time Series Analysis","NLP Applications","System Modeling","NoSQL Databases","Docker","Kubernetes","Data Visualization Tools","Cloud Certifications","Architecture Certifications"],"datePosted":"2026-03-09T16:55:58.580Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Bavaria, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Science, Machine Learning, Deep Learning, GenAI, Data Engineering, Data Warehousing, Data Lakes, Lakehouses, Data Pipelines, Cloud-based Data Platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse, Microsoft Fabric, CI/CD, Python, SQL, Relational Databases, Streaming Technologies, Time Series Analysis, NLP Applications, System Modeling, NoSQL Databases, Docker, Kubernetes, Data Visualization Tools, Cloud Certifications, 
Architecture Certifications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b1d522e6-6ca"},"title":"Data Engineering & Data Science Consultant","description":"<p><strong>Data Engineering &amp; Data Science Consultant</strong></p>\n<p>You will work hands-on on the design, build, and operationalisation of modern data and analytics solutions. You will contribute across the full lifecycle – from data ingestion and transformation to analytics, machine learning, and production deployment. You will collaborate closely with data engineers, architects, data scientists, and business stakeholders to deliver scalable, reliable, and value-driven data solutions in complex client environments.</p>\n<p><strong>Your role will include:</strong></p>\n<ul>\n<li>Applying data science and machine learning techniques to real-world business problems</li>\n<li>Working with structured and semi-structured data in data lakes, lakehouses, and data warehouses</li>\n<li>Developing and optimising data transformations for analytical and machine learning workloads</li>\n<li>Supporting the productionisation of data and ML solutions, including monitoring and optimisation</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3–5 years of experience in data engineering, data science, or analytics</li>\n<li>Hands-on experience delivering data and analytics solutions in project-based or client environments</li>\n<li>Strong problem-solving skills and a pragmatic, delivery-oriented mindset</li>\n</ul>\n<p><strong>Data Engineering Foundations</strong></p>\n<ul>\n<li>Experience building end-to-end data pipelines (ingestion, transformation, storage)</li>\n<li>Solid understanding of data modelling, data transformations, and feature engineering</li>\n<li>Familiarity with cloud-based data platforms, such as Azure, AWS, or GCP</li>\n<li>Understanding of CI/CD concepts and production-grade deployments</li>\n</ul>\n<p><strong>Applied 
Data Science &amp; Analytics</strong></p>\n<ul>\n<li>Experience applying statistical analysis and machine learning techniques</li>\n<li>Strong programming skills in Python</li>\n<li>Very good SQL skills and experience working with relational databases</li>\n<li>Experience deploying or supporting ML models in production environments</li>\n<li>Ability to translate analytical results into business-relevant insights</li>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to have</strong></p>\n<ul>\n<li>Experience with streaming technologies (e.g. Kafka, Azure Event Hubs)</li>\n<li>Exposure to GenAI, NLP, time series, or advanced analytics use cases</li>\n<li>Experience with NoSQL databases (e.g. MongoDB, Cosmos DB)</li>\n<li>Familiarity with Docker and Kubernetes</li>\n<li>Experience with data visualisation tools (e.g. Power BI, Tableau)</li>\n<li>Cloud or data-related certifications</li>\n</ul>\n<p><strong>Language &amp; Mobility</strong></p>\n<ul>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. 
Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>\n<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>\n<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work for. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>\n<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. 
Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b1d522e6-6ca","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/sJqhkr23sMG2F6ppqk2BQn/remote-data-engineering-%26-data-science-consultant-in-poland-at-infosys-consulting---europe","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data science","machine learning","data engineering","data analytics","cloud-based data platforms","Azure","AWS","GCP","Python","SQL","relational databases","data visualisation tools","Power BI","Tableau"],"x-skills-preferred":["streaming technologies","GenAI","NLP","time series","advanced analytics","NoSQL databases","Docker","Kubernetes"],"datePosted":"2026-03-09T16:51:38.126Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Poland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data science, machine learning, data engineering, data analytics, cloud-based data platforms, Azure, AWS, GCP, Python, SQL, relational databases, data visualisation tools, Power BI, Tableau, streaming technologies, GenAI, NLP, time series, advanced analytics, NoSQL databases, Docker, Kubernetes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_56dc9a51-e66"},"title":"Principal Consultant - Data Architecture","description":"<p><strong>Principal Consultant - Data Architecture</strong></p>\n<p>You will be part of an entrepreneurial, high-growth environment of 300,000 employees. 
Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset.</p>\n<p><strong>About Your Role</strong></p>\n<p>As a Principal Data Architecture Consultant, you will act as a senior technical leader in complex data and analytics engagements. You will shape and govern end-to-end enterprise data architectures, lead technical teams, and serve as a trusted technical advisor for clients and internal stakeholders.</p>\n<p><strong>Your Role Will Include:</strong></p>\n<ul>\n<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>\n<li>Translate business objectives into scalable, secure, and compliant data solutions</li>\n<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>\n<li>Guide delivery teams through implementation, rollout, and production readiness</li>\n<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>\n<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>\n<li>Support pre-sales and solution design activities from a technical perspective</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>\n<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>\n<li>Strong client-facing experience in complex enterprise environments</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>Strong expertise in modern data architectures, including:</li>\n<li>Data Mesh/ Data Fabric/ Data lake / data warehouse architectures</li>\n<li>Modern Data Architecture design principles</li>\n<li>Batch and streaming data 
integration patterns</li>\n<li>Data Platform, DevOps, deployment and security architectures</li>\n<li>Analytics and AI enablement architectures</li>\n<li>Hands-on experience with cloud data platforms, e.g.:</li>\n<li>Azure, AWS or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>\n<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>\n<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>\n<li>Solid understanding of API-based and event-driven architectures</li>\n<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation etc.</li>\n</ul>\n<p><strong>Engineering &amp; Platform Foundations</strong></p>\n<ul>\n<li>Experience with data pipelines, orchestration, and automation</li>\n<li>Familiarity with CI/CD concepts and production-grade deployments</li>\n<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>\n</ul>\n<p><strong>Data Management &amp; Governance</strong></p>\n<ul>\n<li>Strong understanding of data management and governance principles, including:</li>\n<li>Data quality, metadata, lineage, master data management</li>\n<li>Data Management software and tools</li>\n<li>Security, access control, and compliance considerations</li>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>\n<li>Experience with streaming platforms (e.g. 
Kafka, Azure Event Hubs)</li>\n<li>Hands-on experience with data governance or metadata tools</li>\n<li>Cloud, data, or architecture certifications</li>\n</ul>\n<p><strong>Language &amp; Mobility</strong></p>\n<ul>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>You will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>\n<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>\n<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work for. 
Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>\n<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_56dc9a51-e66","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/hpBWjvvy8D6B1f818cHxZR/remote-principal-consultant---data-architecture-in-poland-at-infosys-consulting---europe","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["enterprise data architecture","system data integration","data engineering","analytics","modern data architectures","Data Mesh/ Data Fabric/ Data lake / data warehouse architectures","Modern Data Architecture design principles","Batch and streaming data integration patterns","Data Platform, DevOps, deployment and security architectures","Analytics and AI enablement architectures","cloud data platforms","Azure","AWS","GCP","Databricks","Snowflake","BigQuery","Azure Synapse / Microsoft Fabric","SQL","relational databases","Postgres","SQL Server","Oracle","NoSQL databases","Cosmos DB","MongoDB","InfluxDB","API-based and event-driven architectures","data migration programmes","data pipelines","orchestration","automation","CI/CD concepts","production-grade deployments","distributed systems","Docker","Kubernetes","data management and governance principles","data 
quality","metadata","lineage","master data management","data management software and tools","security","access control","compliance considerations","Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience"],"x-skills-preferred":["advanced analytics","AI / ML or GenAI","streaming platforms","Kafka","Azure Event Hubs","data governance or metadata tools","cloud","data","architecture certifications"],"datePosted":"2026-03-09T16:51:22.857Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Poland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"enterprise data architecture, system data integration, data engineering, analytics, modern data architectures, Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, cloud data platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, SQL, relational databases, Postgres, SQL Server, Oracle, NoSQL databases, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, data migration programmes, data pipelines, orchestration, automation, CI/CD concepts, production-grade deployments, distributed systems, Docker, Kubernetes, data management and governance principles, data quality, metadata, lineage, master data management, data management software and tools, security, access control, compliance considerations, Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience, advanced analytics, AI / ML or GenAI, streaming platforms, Kafka, Azure Event Hubs, data governance or metadata tools, cloud, data, architecture 
certifications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_684c9a64-a14"},"title":"Software Engineer, Associate","description":"<p>Are you interested in building innovative technology that shapes the financial markets? Do you like working at the speed of a startup, and tackling some of the world&#39;s most interesting challenges? At BlackRock, we are looking for Software Engineers who like to innovate and solve sophisticated problems.</p>\n<p>We recognize that strength comes from diversity, and will embrace your outstanding skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual. Our technology empowers millions of investors to save for retirement, pay for college, buy a home and improve their financial well-being.</p>\n<p>Our ETF development team is part of the Aladdin Engineering group. We manage a software platform that oversees the global iShares investment process. 
Together, we develop cutting-edge technology that transforms the interaction between information, people, and technology for global investment firms.</p>\n<p>As a member of Aladdin Engineering, you will be working in a fast-paced and highly complex environment, collaborating with cross-functional teams in a multi-office, multi-country environment to define, design, and ship high-quality software solutions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, deliver, and maintain applications with focus on high efficiency, high availability, concurrent and fault-tolerant software</li>\n<li>Demonstrate technical leadership of software design &amp; architecture to support strategic product roadmap</li>\n<li>Collaborate with project managers, technical leads, and business analysts to contribute throughout the SDLC</li>\n<li>Manage stakeholders for driving business decisions, negotiating priorities, and partnering with various business teams to drive strategy and technology adoption</li>\n<li>Ensure scale, resilience and stability through risk identification and mitigation, quality code reviews, creating robust test suites, and providing level two support</li>\n</ul>\n<p>Skills &amp; Experience:</p>\n<ul>\n<li>Hands-on programming experience in Java and/or Python with OO skills and design patterns</li>\n<li>Exposure to building microservices and APIs ideally with Kafka or gRPC</li>\n<li>Experience working with relational and NoSQL databases (such as SQL Server, Apache Cassandra)</li>\n<li>Experience with DevOps, continuous integration, and continuous deployment (CI/CD) pipelines, and tools like Azure DevOps</li>\n<li>Strong problem-solving, analytical, and software architecture skills</li>\n<li>Experience in partnering with other teams, sponsors, and user groups who are on the same product journey</li>\n<li>Ability to work in Agile/Scrum development environments with strong teamwork, communication, and time management skills</li>\n<li>Innovative and a thought 
leader around new/cutting-edge technologies</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>Exposure to and innovative thinking around AI workflows and code generation</li>\n<li>Experience with financial applications</li>\n<li>Experience with cloud native tools (such as Kubernetes, Docker) and cloud platforms (such as Azure, AWS, or GCP)</li>\n<li>Exposure to high scale distributed technologies such as Kafka, Ignite, Redis</li>\n<li>Experience or real interest in finance, investment processes, and/or an ability to translate business problems into technical solutions</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>B.S. / M.S. college degree in Computer Science, Engineering, or related subject area</li>\n<li>3+ years of hands-on development exposure</li>\n</ul>\n<p>Our benefits</p>\n<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>\n<p>Our hybrid work model</p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. 
As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p>About BlackRock</p>\n<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_684c9a64-a14","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/8PKqQ6FiWNCs2s8YbwAy9C/software-engineer%2C-associate-in-london-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Python","OO skills","design patterns","microservices","APIs","Kafka","gRPC","relational databases","NoSQL databases","DevOps","continuous integration","continuous deployment","CI/CD pipelines","Azure DevOps","problem-solving","analytical skills","software architecture","Agile/Scrum development environments","teamwork","communication","time management"],"x-skills-preferred":["AI workflows","code generation","financial applications","cloud native tools","cloud platforms","high scale distributed technologies"],"datePosted":"2026-03-09T16:44:31.460Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Java, Python, OO 
skills, design patterns, microservices, APIs, Kafka, gRPC, relational databases, NoSQL databases, DevOps, continuous integration, continuous deployment, CI/CD pipelines, Azure DevOps, problem-solving, analytical skills, software architecture, Agile/Scrum development environments, teamwork, communication, time management, AI workflows, code generation, financial applications, cloud native tools, cloud platforms, high scale distributed technologies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2305618f-5e7"},"title":"Backend Engineer: Retail Media","description":"<p><strong>About the Job</strong></p>\n<p>Constructor is seeking a Backend Engineer to join our Retail Media team. As a Backend Engineer, you will design, deliver, and maintain web services in close collaboration with other engineers.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Build, deploy, and support services using Python and FastAPI</li>\n<li>Write AWS CloudFormation scripts, Jenkins jobs, and GitHub actions following best industry standards</li>\n<li>Set up service observability, monitoring metrics, and alerting (Prometheus, Grafana, PagerDuty, AWS CloudWatch)</li>\n<li>Implement CI/CD pipelines and separate stability testing</li>\n<li>Collaborate with technical and non-technical business partners to develop and update functionalities</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong computer science background and familiarity with networking principles</li>\n<li>Experience in designing, developing, and maintaining high-load real-time services</li>\n<li>Proficiency in Infrastructure as Code (IaC) tools like CloudFormation or Terraform for managing cloud resources</li>\n<li>Hands-on experience with setting up and improving CI/CD pipelines</li>\n<li>Proficiency in Python</li>\n<li>Experience in server-side coding for web services and a good understanding of API design 
principles</li>\n<li>Skilled in setting up and managing observability tools like Prometheus and Grafana, and integrating alert systems like PagerDuty</li>\n<li>Familiarity with Service-Oriented Architecture and knowledge of communication protocols like protobuf</li>\n<li>Experience with NoSQL and relational databases, distributed systems, and caching solutions (MySQL/PostgreSQL, ClickHouse/Athena)</li>\n<li>Experience with any of the major public cloud service providers: AWS, Azure, GCP</li>\n<li>Experience collaborating in cross-functional teams</li>\n<li>Excellent English communication skills</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time</li>\n<li>Fully remote team</li>\n<li>Work from home stipend</li>\n<li>Apple laptops provided for new employees</li>\n<li>Training and development budget for every employee, refreshed each year</li>\n<li>Maternity and paternity leave for qualified employees</li>\n<li>Work with smart people who will help you grow and make a meaningful impact</li>\n<li>Base salary: $80k-$120k USD, depending on knowledge, skills, experience, and interview results</li>\n<li>Stock options offered in addition to the base salary</li>\n<li>Regular team offsites to connect and collaborate</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2305618f-5e7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/5EBA554B5E","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$80k-$120k USD","x-skills-required":["Python","FastAPI","AWS CloudFormation","Jenkins","GitHub","Prometheus","Grafana","PagerDuty","AWS CloudWatch","CI/CD pipelines","Infrastructure as Code","NoSQL databases","relational 
databases","distributed systems","caching solutions"],"x-skills-preferred":["protobuf","Service-Oriented Architecture","communication protocols"],"datePosted":"2026-03-09T10:58:31.600Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, FastAPI, AWS CloudFormation, Jenkins, GitHub, Prometheus, Grafana, PagerDuty, AWS CloudWatch, CI/CD pipelines, Infrastructure as Code, NoSQL databases, relational databases, distributed systems, caching solutions, protobuf, Service-Oriented Architecture, communication protocols","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_46a8c619-ec1"},"title":"Backend Engineer: AI Shopping Agents","description":"<p><strong>About the Job</strong></p>\n<p>Constructor is seeking a Backend Engineer to join its AI Shopping Agents team. 
The primary focus of this job is to design, deliver &amp; maintain web and data pipeline services in close collaboration with other engineers.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build, deploy, and support backend services</li>\n<li>Define cloud infrastructure using AWS CloudFormation and maintain CI/CD pipelines with GitHub Actions</li>\n<li>Improve and operate our observability stack</li>\n<li>Collaborate with technical and non-technical stakeholders to design, develop, and refine features</li>\n<li>Communicate effectively with stakeholders within and outside the team</li>\n<li>Contribute to data processing pipelines and ETL processes</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong proficiency in Python and server-side web development (API design, concurrency/asynchronous programming)</li>\n<li>Experience designing, building, and operating production backend services (performance, reliability, on-call/operations mindset)</li>\n<li>Experience with Infrastructure as Code and cloud resource management (AWS preferred; Azure/GCP also fine)</li>\n<li>Hands-on experience building or maintaining CI/CD pipelines</li>\n<li>Experience with observability: metrics/logs/traces, dashboards, and alerting</li>\n<li>Experience working with databases, including at least one relational and one NoSQL system (e.g., PostgreSQL, DynamoDB)</li>\n</ul>\n<p><strong>Nice to Haves</strong></p>\n<ul>\n<li>Experience with high-load and/or real-time systems</li>\n<li>Experience with distributed/service-oriented architectures, including interface definition and binary RPC (e.g., Protobuf/gRPC)</li>\n<li>Familiarity with additional vector databases</li>\n<li>Experience contributing to or owning ETL/data pipeline systems at scale</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Work with smart and empathetic people who will help you grow and make a meaningful impact.</li>\n<li>Regular team offsite events to connect and collaborate.</li>\n<li>Fully 
remote team - choose where you live.</li>\n<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year.</li>\n<li>Work from home stipend! We want you to have the resources you need to set up your home office.</li>\n<li>Apple laptops provided for new employees.</li>\n<li>Training and development budget for every employee, refreshed each year.</li>\n<li>Maternity &amp; Paternity leave for qualified employees.</li>\n<li>Base salary: $80k–$120K USD, depending on knowledge, skills, experience, and interview results.</li>\n<li>Stock options - offered in addition to the base salary</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_46a8c619-ec1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/9A5E2DE872","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$80k–$120K USD","x-skills-required":["Python","server-side web development","API design","concurrency/asynchronous programming","Infrastructure as Code","cloud resource management","CI/CD pipelines","observability","metrics/logs/traces","dashboards","alerting","databases","relational databases","NoSQL databases"],"x-skills-preferred":["high-load and/or real-time systems","distributed/service-oriented architectures","interface definition","binary RPC","vector databases","ETL/data pipeline systems"],"datePosted":"2026-03-09T10:58:04.837Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Oregon, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, server-side web development, API design, 
concurrency/asynchronous programming, Infrastructure as Code, cloud resource management, CI/CD pipelines, observability, metrics/logs/traces, dashboards, alerting, databases, relational databases, NoSQL databases, high-load and/or real-time systems, distributed/service-oriented architectures, interface definition, binary RPC, vector databases, ETL/data pipeline systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7bb50768-9c7"},"title":"Data Scientist","description":"<p><strong>Apply now!</strong></p>\n<p>We are seeking a highly skilled Data Scientist to join our team. As a Data Scientist, you will be responsible for analysing large datasets to gain insights and improve our racing performance.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Collect and process large datasets from various sources, including telemetry data, weather forecasts, and track conditions</li>\n<li>Develop and implement machine learning models to predict racing outcomes and identify areas for improvement</li>\n<li>Collaborate with our engineering team to integrate data-driven insights into our racing strategy</li>\n<li>Develop and maintain data visualisation tools to communicate insights to the team</li>\n<li>Stay up-to-date with the latest developments in data science and machine learning</li>\n<li>Work closely with our data engineer to ensure data quality and integrity</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>PhD in Computer Science, Mathematics, or a related field</li>\n<li>Strong programming skills in Python, R, or MATLAB</li>\n<li>Experience with machine learning libraries such as scikit-learn, TensorFlow, or PyTorch</li>\n<li>Strong understanding of statistical concepts and data visualisation techniques</li>\n<li>Excellent 
communication and collaboration skills</li>\n<li>Ability to work in a fast-paced environment and meet deadlines</li>\n</ul>\n<p><strong>Preferred Skills:</strong></p>\n<ul>\n<li>Experience with big data technologies such as Hadoop, Spark, or NoSQL databases</li>\n<li>Knowledge of SQL and database design</li>\n<li>Familiarity with cloud-based data platforms such as AWS or Google Cloud</li>\n<li>Experience with data visualisation tools such as Tableau, Power BI, or D3.js</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a professional motorsport team</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Access to state-of-the-art technology and equipment</li>\n<li>Professional development opportunities</li>\n</ul>\n<p><strong>How to Apply:</strong></p>\n<p>If you are a motivated and talented Data Scientist looking for a new challenge, please submit your application, including your CV and a cover letter, to [insert contact email]. 
We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7bb50768-9c7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"W Racing Team","sameAs":"https://www.w-racingteam.com","logo":"https://logos.yubhub.co/w-racingteam.com.png"},"x-apply-url":"https://www.w-racingteam.com/manufacturing/careers/stage","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["Python","R","MATLAB","scikit-learn","TensorFlow","PyTorch","SQL","database design"],"x-skills-preferred":["Hadoop","Spark","NoSQL databases","Tableau","Power BI","D3.js"],"datePosted":"2026-03-06T14:27:32.187Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"empty"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Motorsport","skills":"Python, R, MATLAB, scikit-learn, TensorFlow, PyTorch, SQL, database design, Hadoop, Spark, NoSQL databases, Tableau, Power BI, D3.js"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c6e2d290-157"},"title":"Software Engineer - II","description":"<p>We&#39;re looking for a talented Software Engineer to join our team. As a Software Engineer, you will be responsible for designing and implementing backend solutions that enhance player experience. 
You will work closely with product owners and senior engineers to design and implement Java backend services and REST APIs using Java, Spring Boot, and Microservices architecture.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Collaborate with product owners and senior engineers to design and implement backend solutions that enhance player experience</li>\n<li>Design Java backend services and REST APIs using Java, Spring Boot, and Microservices architecture</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, or equivalent practical experience</li>\n<li>5+ years of professional experience in backend or Java-based application development</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c6e2d290-157","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II/212870","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Spring Boot","Microservices architecture","RESTful API design"],"x-skills-preferred":["AWS services","Docker","Kubernetes","SQL databases","NoSQL databases"],"datePosted":"2026-02-20T19:06:13.818Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Spring Boot, Microservices architecture, RESTful API design, AWS services, Docker, Kubernetes, SQL databases, NoSQL databases"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0826e52c-87d"},"title":"Software 
Engineer - II","description":"<p>We&#39;re looking for a talented Software Engineer to join our team. As a Software Engineer, you will be responsible for designing and implementing backend solutions that enhance player experience. You will work closely with product owners and senior engineers to design and implement Java backend services and REST APIs using Java, Spring Boot, and Microservices architecture.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Collaborate with product owners and senior engineers to design and implement backend solutions that enhance player experience</li>\n<li>Design Java backend services and REST APIs using Java, Spring Boot, and Microservices architecture</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, or equivalent practical experience</li>\n<li>5+ years of professional experience in backend or Java-based application development</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0826e52c-87d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II/212866","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Spring Boot","Microservices architecture","RESTful API design"],"x-skills-preferred":["AWS services","Docker","Kubernetes","SQL databases","NoSQL databases"],"datePosted":"2026-02-20T19:05:38.282Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Spring Boot, Microservices architecture, RESTful API design, 
AWS services, Docker, Kubernetes, SQL databases, NoSQL databases"}]}