{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/hadoop"},"x-facet":{"type":"skill","slug":"hadoop","display":"Hadoop","count":57},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7275ef33-009"},"title":"Staff Data Engineer","description":"<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows to connect operational systems, data for analytics and business intelligence (BI) systems. 
You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize the code to ensure processes perform optimally, and lead work on database management.</p>\n<p>Communicating Between Technical and Non-Technical Colleagues</p>\n<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>\n<p>Data Analysis and Synthesis</p>\n<p>You will undertake data profiling and source system analysis, present clear insights to colleagues to support the end use of the data.</p>\n<p>Data Development Process</p>\n<p>You will design, build and test data products that are complex or large scale, build teams to complete data integration services.</p>\n<p>Data Innovation</p>\n<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>\n<p>Data Integration Design</p>\n<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>\n<p>Data Modeling</p>\n<p>You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognised data modelling patterns and standards, and when to apply them, compare and align different data models.</p>\n<p>Metadata Management</p>\n<p>You will design an appropriate metadata repository and present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, provide oversight and advice to more inexperienced members of the team.</p>\n<p>Problem Resolution</p>\n<p>You will respond to problems in databases, data processes, data products and services as they occur, initiate actions, monitor services and identify trends to resolve problems, determine the appropriate remedy and 
assist with its implementation, and with preventative measures.</p>\n<p>Programming and Build</p>\n<p>You will use agreed standards and tools to design, code, test, correct and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, collaborate with others to review specifications where appropriate.</p>\n<p>Technical Understanding</p>\n<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>\n<p>Testing</p>\n<p>You will review requirements and specifications, and define test conditions, identify issues and risks associated with work, analyse and report test activities and results.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7275ef33-009","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976928777","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$114,400 to $171,600","x-skills-required":["Proficiency in programming language such as Python or Java","Experience with Big Data technologies such as Hadoop, Spark, and Kafka","Familiarity with ETL processes and tools","Knowledge of SQL and NoSQL databases","Strong understanding of relational databases","Experience with data warehousing solutions","Proficiency with cloud platforms","Expertise in data modeling and design","Experience in designing and building scalable data pipelines","Experience with RESTful APIs and data integration"],"x-skills-preferred":["Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified)","Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field","Strong analytical and communication 
skills","Ability to work collaboratively in a team environment","High level of accuracy and attention to detail"],"datePosted":"2026-04-18T22:12:56.654Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to detail","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":171600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e8aabc91-c80"},"title":"Assistant Manager of Data Analytics","description":"<p>We are seeking an experienced professional to join our team in Shanghai. As Assistant Manager of Data Analytics, you will focus on using data and analytics to drive business activities and outcomes that improve or transform customer strategy, customer segmentation, predictive models, and marketing campaigns.</p>\n<p>Principal Responsibilities: The role holder will conduct customer strategy analysis focusing on acquisition, activation, retention, conversion, and LTV, and deliver actionable insights. 
Build and maintain customer segmentation frameworks to support targeted and personalized marketing and operations. Leverage advanced data analytics tools and methodologies to develop, validate, and optimize predictive models, contributing to generate high-quality leads. Analyze customer journey, conversion funnels, and drop-off points to identify bottlenecks and recommend experience improvements. Evaluate the performance of marketing campaigns, membership programs, loyalty initiatives, and promotional strategies by measuring ROI, conversion rate, and engagement metrics. Partner with product, marketing, operations, and customer teams to translate data insights into executable strategies and drive business decisions. Support the business team&#39;s campaign needs, including RM lead generation and manual SMS outreach. Develop and maintain customer-focused dashboards, KPIs, and reporting systems.</p>\n<p>To be successful in the role, you should meet the following requirements: Minimum of 5 years&#39; experience in one or multiple skills in data/business analytics in the financial or digital domains. Demonstrated experience in process and analysis of large amounts of data using one of these: Python, R, SQL, or SAS; on environments such as AWS, Google Cloud, or Hadoop. Knowledge and experience in AI, big data, machine learning, or predictive algorithms, statistics modeling, and data mining. Excellent communication and teamwork skills, able to collaborate effectively with different departments and stakeholders. Strong problem-solving skills and innovative thinking, able to translate complex business problems into data analytics solutions. Proven experience in one or more of: customer segmentation, digital marketing, data science, portfolio analytics, use of open-source data in analyses. 
Good English communication skills, able to collaborate effectively with domestic and international teams.</p>","url":"https://yubhub.co/jobs/job_e8aabc91-c80","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC International Wealth and Premier Banking","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610677890","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","R","SQL","SAS","AWS","Google Cloud","Hadoop","AI","big data","machine learning","predictive algorithms","statistics modeling","data mining"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:11:33.642Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Shanghai"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Python, R, SQL, SAS, AWS, Google Cloud, Hadoop, AI, big data, machine learning, predictive algorithms, statistics modeling, data mining"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e5fa8591-cb8"},"title":"Solutions Architect: Data & AI","description":"<p>As a Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. 
You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the field.</p>\n<p>You will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your clients through evaluating and adopting Databricks including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>\n</ul>\n<p>Together with your Account Executive, you will form successful relationships with clients throughout your assigned territory to provide technical and business value</p>\n<p>What we look for:</p>\n<ul>\n<li>Strong consulting / customer facing experience, working with external clients across a variety of industry markets</li>\n<li>Core strength in either data engineering or data science technologies</li>\n<li>8+ years of experience demonstrating technical concepts, including demos, presenting and white-boarding</li>\n<li>8+ years of experience designing architectures within a public cloud (AWS, Azure or GCP)</li>\n<li>6+ years of experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Coding experience in Python, R, Java, Apache Spark or Scala</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. 
More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>\n<p>Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. 
government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>","url":"https://yubhub.co/jobs/job_e5fa8591-cb8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8353757002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","R","Java","Scala"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:24.843Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, R, Java, Scala"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9bb1344c-662"},"title":"Sr. Solutions Engineer, Retail - CPG","description":"<p>We are looking for a Senior Solutions Engineer to join our team. As a Senior Solutions Engineer, you will work with large enterprises in the Retail and CPG space to help them become more data-driven. You will define and direct the technical strategy for our largest and most important accounts, leading to more widespread use of our products and wider and deeper adoption of ML &amp; AI.</p>\n<p>You will work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives. 
You will also work with a team of engineers to build proofs of concept and demonstrate our products.</p>\n<p>The ideal candidate will have a strong background in value selling, technical account management, and technical leadership. They will also have a solid understanding of big data, data science, and cloud technologies.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Define and direct the technical strategy for our largest and most important accounts</li>\n<li>Work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives</li>\n<li>Collaborate with a team of engineers to build proofs of concept and demonstrate our products</li>\n<li>Provide technical guidance and support to customers</li>\n<li>Work with customers to identify and address technical issues</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience working with large enterprises in the Retail and CPG space</li>\n<li>3+ years of experience in a pre-sales capacity or supporting sales activity</li>\n<li>Strong background in value selling, technical account management, and technical leadership</li>\n<li>Solid understanding of big data, data science, and cloud technologies</li>\n<li>Experience with design and implementation of big data technologies such as Hadoop, NoSQL, MPP, OLTP, and OLAP</li>\n<li>Production programming experience in Python, R, Scala, or Java</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Databricks Certification</li>\n</ul>",
"url":"https://yubhub.co/jobs/job_9bb1344c-662","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7507778002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["big data","data science","cloud technologies","Hadoop","NoSQL","MPP","OLTP","OLAP","Python","R","Scala","Java"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:57:56.592Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Illinois"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data, data science, cloud technologies, Hadoop, NoSQL, MPP, OLTP, OLAP, Python, R, Scala, Java, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b2f6f807-fc6"},"title":"Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>\n<p>We are looking for a software engineer to join our team as a founding member of our Belgrade site. 
As a software engineer, you will be involved in the entire development cycle and exemplify all core Databricks values.</p>\n<p>The responsibilities you will have:</p>\n<ul>\n<li>Drive requirements clarity and design decisions for ambiguous problems</li>\n<li>Produce technical design documents and project plans</li>\n<li>Develop new features</li>\n<li>Mentor more junior engineers</li>\n<li>Test and rollout to production, monitoring</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS in Computer Science or equivalent practical experience in databases or distributed systems</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Motivated by delivering customer value and impact</li>\n<li>3+ years of production level experience in either Java, Scala or C++</li>\n<li>Solid foundation in algorithms and data structures and their real-world use cases</li>\n<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop)</li>\n</ul>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region, please click here.</p>","url":"https://yubhub.co/jobs/job_b2f6f807-fc6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8012691002","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:53.371Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Belgrade, Serbia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_22bcbb50-ef4"},"title":"Member of Technical Staff - Data Platform","description":"<p><strong>About the Role</strong></p>\n<p>The Data Platform team at xAI builds and operates the infrastructure responsible for all large-scale data transport and processing across the company.</p>\n<p>As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X&#39;s data movement and compute.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and implement high-throughput, low-latency data ingestion and transport systems.</li>\n<li>Scale and optimise multi-tenant Kafka infrastructure supporting real-time workloads.</li>\n<li>Extend and tune Spark, Flink, and 
Trino for demanding production pipelines.</li>\n<li>Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.</li>\n<li>Debug and optimise distributed systems, with a focus on reliability and performance under load.</li>\n<li>Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Proven expertise in distributed systems, stream processing, or large-scale data platforms.</li>\n<li>Proficiency in Rust, Go, Scala or similar systems languages.</li>\n<li>Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production.</li>\n<li>Strong debugging, profiling, and performance optimisation skills.</li>\n<li>Track record of shipping and maintaining critical infrastructure.</li>\n<li>Comfortable working in fast-moving, high-stakes environments with minimal guardrails.</li>\n</ul>\n<p><strong>Compensation and Benefits</strong></p>\n<p>$180,000 - $440,000 USD</p>\n<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>","url":"https://yubhub.co/jobs/job_22bcbb50-ef4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.x.com/","logo":"https://logos.yubhub.co/x.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4803862007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Rust","Go","Scala","Kafka","Flink","Spark","Trino","Hadoop","distributed systems","stream processing","large-scale data 
platforms"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:30.705Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, Go, Scala, Kafka, Flink, Spark, Trino, Hadoop, distributed systems, stream processing, large-scale data platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8871a994-591"},"title":"Machine Learning Engineer, Core Engineering","description":"<p>We&#39;re seeking a talented Machine Learning Engineer to join our Core Engineering team. As a Machine Learning Engineer at Pinterest, you will build cutting-edge technology using the latest advances in deep learning and machine learning to personalize Pinterest. 
You will partner closely with teams across Pinterest to experiment and improve ML models for various product surfaces, while gaining knowledge of how ML works in different areas.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Build cutting-edge technology using the latest advances in deep learning and machine learning to personalize Pinterest</li>\n<li>Partner closely with teams across Pinterest to experiment and improve ML models for various product surfaces (Homefeed, Ads, Growth, Shopping, and Search), while gaining knowledge of how ML works in different areas</li>\n<li>Use data-driven methods and leverage the unique properties of our data to improve candidate retrieval</li>\n<li>Work in a high-impact environment with quick experimentation and product launches</li>\n<li>Keep up with industry trends in recommendation systems</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>2+ years of industry experience applying machine learning methods (e.g., user modeling, personalization, recommender systems, search, ranking, natural language processing, reinforcement learning, and graph representation learning)</li>\n<li>End-to-end hands-on experience with building data processing pipelines, large-scale machine learning systems, and big data technologies (e.g., Hadoop/Spark)</li>\n<li>Degree in computer science, machine learning, statistics, or related field</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>M.S. 
or PhD in Machine Learning or related areas</li>\n<li>Publications at top ML conferences</li>\n<li>Experience using Cursor, Copilot, Codex, or similar AI coding assistants for development, debugging, testing, and refactoring</li>\n<li>Familiarity with LLM-powered productivity tools for documentation search, experiment analysis, SQL/data exploration, and engineering workflow acceleration</li>\n<li>Expertise in scalable real-time systems that process stream data</li>\n<li>Passion for applied ML and the Pinterest product</li>\n</ul>\n<p>Relocation Statement:</p>\n<p>This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.</p>","url":"https://yubhub.co/jobs/job_8871a994-591","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pinterest","sameAs":"https://www.pinterest.com/","logo":"https://logos.yubhub.co/pinterest.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/6121450","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$138,905-$285,982 USD","x-skills-required":["machine learning","deep learning","data processing pipelines","large-scale machine learning systems","big data technologies","Hadoop","Spark","natural language processing","reinforcement learning","graph representation learning"],"x-skills-preferred":["Cursor","Copilot","Codex","LLM-powered productivity tools","scalable real-time systems","stream data"],"datePosted":"2026-04-18T15:57:30.186Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; Remote, US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, deep learning, data processing 
pipelines, large-scale machine learning systems, big data technologies, Hadoop, Spark, natural language processing, reinforcement learning, graph representation learning, Cursor, Copilot, Codex, LLM-powered productivity tools, scalable real-time systems, stream data","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":138905,"maxValue":285982,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cb18189c-d78"},"title":"Solutions Architect (Pre-sales) - Kansai Region","description":"<p>As a Pre-sales Solutions Architect (Analytics, AI, Big Data, Public Cloud) – Kansai Region, your mission will be to drive successful technical evaluations and solution designs for some of our focus customers in the Kansai region (Osaka/Kyoto) for Databricks Japan.</p>\n<p>You are passionate about data and AI, love getting hands-on with technology, and enjoy communicating its value to both technical and non-technical stakeholders. 
Partnering closely with Account Executives, you will lead the technical discovery, architecture design, and proof-of-concept phases, and act as a trusted advisor to our customers on their data and AI strategy.</p>\n<p>You will help customers realize tangible, data-driven outcomes on the Databricks Lakehouse Platform by guiding data and AI teams to design, build, and operationalize solutions within their enterprise ecosystem.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your prospects through evaluating and adopting Databricks</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars, and meet-ups</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Pre-sales or post-sales experience working with external clients across a variety of industry markets</li>\n<li>Understanding of customer-facing pre-sales or consulting role with a core strength in either Data Engineering or Data Science advantageous</li>\n<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>\n<li>Experience designing and implementing architectures within public clouds (AWS, Azure, or GCP)</li>\n<li>Experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Fluent coding experience in Python or Scala implementing Apache Spark, Java, and R is also desirable</li>\n<li>Experience working with Enterprise Accounts</li>\n<li>Written and verbal fluency in Japanese</li>\n</ul>\n<p>Benefits:</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region, click here.</p>","url":"https://yubhub.co/jobs/job_cb18189c-d78","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437028002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","Scala","Java","R","Public Cloud","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:24.678Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R, Public Cloud, AWS, Azure, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd67fe82-1c8"},"title":"Solutions Architect : Data & AI","description":"<p>As a Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. 
You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the field.</p>\n<p>You will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise ecosystem.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your clients through evaluating and adopting Databricks, including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>\n</ul>\n<p>Together with your Account Executive, you will form successful relationships with clients throughout your assigned territory to provide technical and business value.</p>\n<p>What we look for:</p>\n<ul>\n<li>Strong consulting / customer-facing experience, working with external clients across a variety of industry markets</li>\n<li>Core strength in either data engineering or data science technologies</li>\n<li>8+ years of experience demonstrating technical concepts, including demos, presenting and whiteboarding</li>\n<li>8+ years of experience designing architectures within a public cloud (AWS, Azure or GCP)</li>\n<li>6+ years of experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Coding experience in Python, R, Java, Apache Spark or Scala</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. 
More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. 
government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>","url":"https://yubhub.co/jobs/job_dd67fe82-1c8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8346277002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data technologies","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","R","Java","Scala"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:18.281Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data technologies, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, R, Java, Scala"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8317ba42-502"},"title":"Senior Technical Solutions Engineer (Platform)","description":"<p>We are seeking a highly skilled Frontline Senior Technical Solutions Engineer with 7+ years of experience to join our Platform Support team.</p>\n<p>This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>\n<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>\n<p>Key 
Responsibilities:</p>\n<ul>\n<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>\n<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>\n<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>\n<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>\n<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>\n<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. Implement optimizations and enhancements to improve platform stability and efficiency.</li>\n<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>\n<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. Propose and drive initiatives to enhance platform capabilities and support processes.</li>\n<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Experience: Minimum of 7 years of hands-on experience in a technical support or engineering role related to the Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>\n<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines and distributed computing are required. Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimization.</li>\n<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>\n<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>\n<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>\n<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>\n<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degree or relevant certifications are highly desirable.</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>\n<li>Familiarity with automation tools and CI/CD pipelines.</li>\n<li>Understanding of data governance and compliance requirements.</li>\n</ul>\n<p>Why Join Us?</p>\n<ul>\n<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>\n<li>Career Growth: Opportunities for professional development and career advancement.</li>\n<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>\n</ul>\n<p>PLEASE NOTE: THE ROLE INVOLVES WORKING IN THE EMEA TIMEZONE</p>","url":"https://yubhub.co/jobs/job_8317ba42-502","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8041698002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Databricks architecture","Apache Spark","AWS","Azure","GCP","Unix/Linux administration","DevOps practices","log analysis and monitoring tools"],"x-skills-preferred":["Hadoop","Kafka","NoSQL databases","automation tools","CI/CD pipelines","data governance and compliance requirements"],"datePosted":"2026-04-18T15:55:32.901Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux 
administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0b29d013-412"},"title":"Senior Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Our customers use deep data insights to improve their business. As a senior software engineer on the Runtime team, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>\n<p>Some example projects include: Apache Spark: Develop the de facto open source standard framework for big data. Data Plane Storage: Provide reliable and high performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store. Delta Lake: A storage management system that combines the scale and cost-efficiency of data lakes, the performance and reliability of a data warehouse, and the low latency of streaming. Delta Pipelines: It&#39;s difficult to manage even a single data engineering pipeline. The goal of the Delta Pipelines project is to make it simple and possible to orchestrate and operate tens of thousands of data pipelines. Performance Engineering: Build the next generation query optimizer and execution engine that&#39;s fast, tuning free, scalable, and robust.</p>\n<p>We look for: BS (or higher) in Computer Science, related technical field or equivalent practical experience. 
Comfortable working towards a multi-year vision with incremental deliverables. Motivated by delivering customer value and impact. 5+ years of production-level experience in either Java, Scala or C++. Strong foundation in algorithms and data structures and their real-world use cases. Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop).</p>\n<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Local Pay Range $166,000-$225,000 USD</p>","url":"https://yubhub.co/jobs/job_0b29d013-412","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/4513122002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:01.767Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data 
Systems, Apache Spark, Hadoop","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bbbd3f3a-5fe"},"title":"Solutions Architect (Pre-sales) - Digital Native","description":"<p>As a Pre-sales Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the Digital Native field.</p>\n<p>You will help our customers to achieve tangible data-driven outcomes through the use of the Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise ecosystem. You&#39;ll grow as a leader in your field, while finding solutions to our customers&#39; biggest challenges in big data, analytics, data engineering and data science.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your prospects through evaluating and adopting Databricks</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Pre-sales or post-sales experience working with external clients across a variety of industry markets</li>\n<li>Experience in a customer-facing pre-sales or consulting role, with a core strength in either Data Engineering or Data Science, is advantageous</li>\n<li>Experience demonstrating technical concepts, including presenting and 
whiteboarding</li>\n<li>Experience designing and implementing architectures within public clouds (AWS, Azure or GCP)</li>\n<li>Experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others.</li>\n<li>Fluent coding experience in Python or Scala implementing Apache Spark; Java and R are also desirable</li>\n<li>Experience working with Enterprise Accounts</li>\n<li>Written and verbal fluency in Japanese and English</li>\n</ul>","url":"https://yubhub.co/jobs/job_bbbd3f3a-5fe","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437026002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Public Cloud","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra"],"x-skills-preferred":["Python","Scala","Java","R"],"datePosted":"2026-04-18T15:54:50.098Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Public Cloud, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1d94b9cf-773"},"title":"Machine Learning Intern Fall 2026 (Toronto)","description":"<p>About the Role</p>\n<p>We&#39;re looking for a Machine Learning Intern to join our team in Toronto. 
As a Machine Learning Intern, you will work on tackling new challenges in machine learning and artificial intelligence. You will join our engineering teams as we maneuver through exponential growth and massive scale while building awesome products and features, creating visually rich experiences, spearheading the discovery problem, and pinpointing tomorrow&#39;s engineering challenges.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Lead your own project from start to finish to contribute to cutting-edge research in machine learning and artificial intelligence that can be applied to Pinterest problems</li>\n<li>Collect, analyze, and synthesize findings from data and build intelligent data-driven models</li>\n<li>Write clean, efficient, and sustainable code</li>\n<li>Use machine learning, natural language processing, and graph analysis to solve modeling and ranking problems across discovery, ads and search</li>\n<li>Scope and independently solve moderately complex problems</li>\n<li>Demonstrate accountability for the quality and completion of your tasks and projects, collaborating with your team and seeking guidance as needed</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>Working towards a Master&#39;s or PhD degree in Computer Science, ML, NLP, Statistics, Information Sciences or related field</li>\n<li>Machine Learning (ranking, computer vision, NLP, content recommendations, embedding, information retrieval etc)</li>\n<li>Experience with big data technologies (e.g., Hadoop/Spark) and scalable realtime systems that process stream data</li>\n<li>Strong interest in research and applying machine learning and AI to drive meaningful product innovation and user impact</li>\n<li>Exposure to ML, AI, data analytics, statistics, or related technical fields, through research, coursework, projects, or internships</li>\n<li>Proficiency in at least one systems language (Java, C++, Python) or one ML framework (Tensorflow, Pytorch, MLFlow)</li>\n<li>Experience in research and in solving 
analytical problems</li>\n<li>Strong communicator and team player with the ability to find solutions for open-ended problems</li>\n</ul>","url":"https://yubhub.co/jobs/job_1d94b9cf-773","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pinterest","sameAs":"https://www.pinterest.com/","logo":"https://logos.yubhub.co/pinterest.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/7268778","x-work-arrangement":"hybrid","x-experience-level":"entry","x-job-type":"internship","x-salary-range":"$6,000 - $9,500 CAD monthly","x-skills-required":["Machine Learning","Artificial Intelligence","Python","Java","C++","Hadoop","Spark","Tensorflow","Pytorch","MLFlow","Natural Language Processing","Graph Analysis"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:24.814Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, ON, CA"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Artificial Intelligence, Python, Java, C++, Hadoop, Spark, Tensorflow, Pytorch, MLFlow, Natural Language Processing, Graph Analysis","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":6000,"maxValue":9500,"unitText":"MONTH"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a78c8753-f89"},"title":"Staff Software Engineer - 
Distributed Data Systems","description":"<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>\n<p>We develop and operate one of the largest scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day. At our scale, we regularly observe cloud hardware, network, and operating system faults, and our software must gracefully shield our customers from any of the above.</p>\n<p>As a software engineer on the Runtime team at Databricks, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>\n<p>Below are some example projects:</p>\n<ul>\n<li>Apache Spark: Develop the de facto open source standard framework for big data.</li>\n<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n<li>Delta Lake: A storage management system that combines the scale and cost-efficiency of data lakes, the performance and reliability of a data warehouse, and the low latency of streaming.</li>\n<li>Delta Pipelines: It&#39;s difficult to manage even a single data engineering pipeline. 
The goal of the Delta Pipelines project is to make it simple and possible to orchestrate and operate tens of thousands of data pipelines.</li>\n<li>Performance Engineering: Build the next generation query optimizer and execution engine that&#39;s fast, tuning-free, scalable, and robust.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS in Computer Science, related technical field or equivalent practical experience.</li>\n<li>Optional: MS or PhD in databases, distributed systems.</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>\n<li>Driven by delivering customer value and impact.</li>\n<li>8+ years of production-level experience in either Java, Scala, or C++.</li>\n<li>Strong foundation in algorithms and data structures and their real-world use cases.</li>\n<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop).</li>\n</ul>\n<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. 
The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>","url":"https://yubhub.co/jobs/job_a78c8753-f89","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6544364002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$192,000-$260,000 USD","x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:03.334Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_601c2dc5-462"},"title":"Senior Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Our customers use deep data insights to improve their business. 
We are a customer-obsessed company that leaps at every opportunity to solve technical challenges.</p>\n<p>As a software engineer on the Runtime team at Databricks, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>\n<p>Some example projects include:</p>\n<ul>\n<li>Developing the de facto open source standard framework for big data, Apache Spark.</li>\n<li>Providing reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.</li>\n<li>Building the next generation query optimizer and execution engine that&#39;s fast, tuning-free, scalable, and robust.</li>\n</ul>\n<p>We look for candidates with a strong foundation in algorithms and data structures and their real-world use cases, experience with distributed systems, databases, and big data systems, and a BS (or higher) in Computer Science or a related technical field.</p>\n<p>The pay range for this role is $166,000-$225,000 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>","url":"https://yubhub.co/jobs/job_601c2dc5-462","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6544325002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["Java","Scala","C++","Apache 
Spark","Hadoop","Distributed systems","Databases","Big data systems"],"x-skills-preferred":["Algorithms","Data structures","Real-world use cases","Cloud storage backends","Query optimizer","Execution engine"],"datePosted":"2026-04-18T15:53:54.425Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Apache Spark, Hadoop, Distributed systems, Databases, Big data systems, Algorithms, Data structures, Real-world use cases, Cloud storage backends, Query optimizer, Execution engine","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_34a04ec5-ae9"},"title":"Machine Learning Engineer II","description":"<p>We&#39;re looking for a Machine Learning Engineer II to join our Growth Platform engineering group. As a Machine Learning Engineer II, you will be responsible for developing and implementing ML models to improve user targeting and personalization for growth initiatives. You will design and build scalable ML pipelines for data processing, model training, and deployment. You will collaborate with cross-functional teams to identify potential ML solutions for growth opportunities. You will conduct A/B tests to evaluate the performance of ML models and optimize their impact on key growth metrics. You will analyze large datasets to extract insights and inform decision-making for user acquisition and retention strategies. You will contribute to the development of our ML infrastructure, ensuring it can support rapid experimentation and deployment. You will stay up-to-date with the latest advancements in ML and recommend new techniques to enhance our growth efforts. 
You will participate in code reviews and collaborate with team members as needed. You will thoughtfully leverage AI tools to speed up design, coding, debugging, and documentation, while applying your own critical thinking to validate outputs and explain how you used AI in your workflow. You will shape our AI-assisted engineering practices by sharing patterns, guardrails, and learnings with the team so we can safely increase our impact without compromising code quality, reliability, or candidate expectations.</p>\n<p>To be successful in this role, you will need to have 3+ years of experience applying ML to real-world problems, preferably in a growth or user acquisition context. You will need to have excellent communication skills and the ability to work effectively in cross-functional teams. You will need to have strong problem-solving skills and the ability to translate business requirements into technical solutions. You will need to have strong programming skills in Python and experience with PyTorch. You will need to have proficiency in data processing and analysis using tools like SQL, Spark, or Hadoop. You will need to have experience with recommendation systems, user modeling, or personalization algorithms. You will need to have familiarity with statistical analysis. You will need to have experience using AI coding assistants and agentic tools as a force-multiplier, and be equally comfortable solving problems from first principles when those tools aren’t available. 
You will need to have a Bachelor’s/Master’s degree in a relevant field or equivalent experience.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_34a04ec5-ae9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pinterest","sameAs":"https://www.pinterest.com/","logo":"https://logos.yubhub.co/pinterest.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/7681666","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","PyTorch","SQL","Spark","Hadoop","Recommendation systems","User modeling","Personalization algorithms","Statistical analysis","AI coding assistants"],"x-skills-preferred":["Natural Language Processing","Data visualization","Cloud platforms","Containerization technologies"],"datePosted":"2026-04-18T15:52:32.389Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, IE"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, PyTorch, SQL, Spark, Hadoop, Recommendation systems, User modeling, Personalization algorithms, Statistical analysis, AI coding assistants, Natural Language Processing, Data visualization, Cloud platforms, Containerization technologies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d1a7c541-3a1"},"title":"Senior Software Engineer - Distributed Data Systems","description":"<p>We are seeking a senior software engineer to join our team in Belgrade. As a founding member of our Belgrade site, you will be involved in the entire development cycle and exemplify all core Databricks values. 
Your responsibilities will include driving requirements clarity and design decisions for ambiguous problems, producing technical design documents and project plans, developing new features, mentoring more junior engineers, testing and rolling out to production, and monitoring.</p>\n<p>To be successful in this role, you will need a BS in Computer Science or equivalent practical experience in databases or distributed systems, comfort working towards a multi-year vision with incremental deliverables, motivation to deliver customer value and impact, and 5+ years of production-level experience in either Java, Scala, or C++. You should also have a solid foundation in algorithms and data structures and their real-world use cases, experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop), and a strong understanding of software engineering principles and practices.</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region, please click here.</p>\n<p>Our commitment to diversity and inclusion is a key part of our culture, and we take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d1a7c541-3a1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8012800002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:08.194Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Belgrade, Serbia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b5c8fceb-189"},"title":"Data Scientist","description":"<p><strong>About the role</strong></p>\n<p>We&#39;re looking for a Data Scientist to join our team at Stripe, where you&#39;ll work closely with our Product, Finance, Payments, Security, Risk, Growth and Go-to-Market teams.</p>\n<p>As a Data Scientist at Stripe, you&#39;ll play a crucial role in optimizing our systems and leveraging data to make strategic business decisions. 
You&#39;ll work with a variety of data science roles and teams across Stripe, and will be aligned to the most relevant team based on your background.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Work closely with a specific part of the business to optimize our systems and leverage data to make strategic business decisions</li>\n<li>Use techniques like machine learning, statistical modeling, causal inference, optimization, experimentation, and all forms of analytics to ensure that the company strategy, products, and user interactions make smart use of our rich data</li>\n<li>Partner deeply with teams across Stripe to ensure that our users, our products, and our business have the models, data products, and insights needed to make decisions and grow responsibly</li>\n</ul>\n<p><strong>Who you are</strong></p>\n<p>We&#39;re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.</p>\n<p><strong>Minimum Requirements</strong></p>\n<ul>\n<li>PhD + 3 years, MS/MA + 6 years or BS/BA + 8 years of data science/quantitative modeling experience</li>\n<li>Proficiency in SQL and a computing language such as Python or R</li>\n<li>Strong knowledge and hands-on experience in several of the following areas: machine learning, statistics, optimization, product analytics, causal inference, and/or experimentation</li>\n<li>Experience in working with cross-functional teams to deliver results</li>\n<li>Ability to communicate results clearly and a focus on driving impact</li>\n<li>A demonstrated ability to manage and deliver on multiple projects with a high attention to detail</li>\n<li>Solid business acumen and experience in synthesizing complex analyses into actionable recommendations</li>\n<li>A builder&#39;s mindset with a willingness to question assumptions and conventional wisdom</li>\n</ul>\n<p><strong>Preferred 
qualifications</strong></p>\n<ul>\n<li>Experience deploying models in production and adjusting model thresholds to improve performance</li>\n<li>Experience designing, running, and analyzing complex experiments or leveraging causal inference designs</li>\n<li>Experience with distributed tools such as Spark, Hadoop, etc.</li>\n<li>A PhD or MS in a quantitative field (e.g., Statistics, Engineering, Mathematics, Economics, Quantitative Finance, Sciences, Operations Research)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b5c8fceb-189","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/5601879","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","R","Machine Learning","Statistics","Optimization","Product Analytics","Causal Inference","Experimentation"],"x-skills-preferred":["Distributed Tools","Spark","Hadoop","PhD","MS","Quantitative Field"],"datePosted":"2026-04-18T15:50:05.184Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"N/A"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, R, Machine Learning, Statistics, Optimization, Product Analytics, Causal Inference, Experimentation, Distributed Tools, Spark, Hadoop, PhD, MS, Quantitative Field"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6aab7ed8-23a"},"title":"Senior Software Engineer - Data","description":"<p>We are seeking an experienced Senior Software Engineer (Data) to join our fast-paced, collaborative data team. 
In this role, you will have broad authority to drive the direction of our technographic data services, building world-class data pipelines and systems to process billions of signals and data points.</p>\n<p>This is an exciting opportunity to solve challenging problems and make a big impact as we invest in making technographics a first-class offering.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Build and optimize big data pipelines to extract and process signals from the web, job postings, and other sources</li>\n<li>Design and implement data architectures and storage solutions to efficiently handle massive data volumes</li>\n<li>Collaborate closely with data scientists to support and integrate ML models into data workflows</li>\n<li>Continuously improve data quality, performance, and scalability of our technographic data platform</li>\n<li>Drive technical strategy and roadmap for the data processing infrastructure</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Extensive experience building and scaling big data pipelines and architectures from scratch</li>\n<li>Deep expertise in big data frameworks (Hadoop, Spark) and the JVM stack (Java, Scala)</li>\n<li>Strong software engineering fundamentals and ability to write efficient, high-quality code</li>\n<li>Experience with entity recognition and NLP techniques a plus</li>\n<li>Proven track record delivering results and driving projects in a fast-paced environment</li>\n<li>Excellent collaboration and communication skills to work with data scientists, analysts and product teams</li>\n<li>Passion for leveraging huge datasets to power valuable insights</li>\n</ul>\n<p>Ideal Background:</p>\n<ul>\n<li>8+ years of experience in software engineering roles</li>\n<li>Experience working with very large datasets and distributed systems</li>\n<li>Familiarity building data pipelines at large tech companies or data-driven organisations</li>\n<li>Bachelor&#39;s or advanced degree in Computer Science, Engineering or related technical 
field</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6aab7ed8-23a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8486808002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$140,000-$220,000 USD","x-skills-required":["big data pipelines","data architectures","storage solutions","ML models","data quality","performance","scalability","data processing infrastructure","Hadoop","Spark","Java","Scala","entity recognition","NLP techniques"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:24.766Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bethesda, Maryland, United States; Waltham, Massachusetts, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data pipelines, data architectures, storage solutions, ML models, data quality, performance, scalability, data processing infrastructure, Hadoop, Spark, Java, Scala, entity recognition, NLP techniques","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b47cf70c-31a"},"title":"Director, Technical Solutions (Big Data/AI)","description":"<p><strong>Job Description</strong></p>\n<p>The Director of Data &amp; AI Support Engineering - Bangalore will lead and grow a regional team of Data &amp; AI technical experts in India, focused on providing resiliency and smooth operation of customer production workloads.</p>\n<p>This leader will 
oversee support operations during APJ and EMEA business hours in close alignment with other global teams to ensure 24x7 support coverage through coordination with other regions.</p>\n<p>The team resolves complex and long-running data engineering use cases raised by Databricks customers to support the success of live use cases, including performance optimization, ensuring resiliency of production jobs, helping customers stabilize workloads on new products and features, and more.</p>\n<p>Reporting to the Global Lead of Frontline Support Engineering - Data &amp; AI, you will understand the real-world business problems our customers are solving with data and be committed to helping them achieve the reliability and performance of their systems to meet their goals.</p>\n<p><strong>The Impact You Will Have:</strong></p>\n<ul>\n<li>Serve as the India site leader for an elite team of Data &amp; AI specialists that can provide coverage of customers across EMEA &amp; APJ business hours.</li>\n</ul>\n<ul>\n<li>Grow the technical expertise of the team to support successful adoption of new products and features of the Databricks platform for customer production workloads.</li>\n</ul>\n<ul>\n<li>Engage with top customers to understand how to support their business needs with their Data &amp; AI strategy, in collaboration with field engineering and sales when required.</li>\n</ul>\n<ul>\n<li>Partner with internal product engineering teams to make Databricks products better and more supportable.</li>\n</ul>\n<ul>\n<li>Understand how to maintain high reliability of the Databricks platform to successfully achieve customer business goals.</li>\n</ul>\n<p><strong>Competencies &amp; Requirements:</strong></p>\n<ul>\n<li>Proven people leadership experience: 6+ years as a manager of managers.</li>\n</ul>\n<ul>\n<li>18+ years in the IT industry, with a strong background in Software Engineering with specialization in Data Engineering, ideally with Big Data &amp; 
Cloud experience.</li>\n</ul>\n<ul>\n<li>Experience leading large teams (100+ employees) in engineering, technical support, or consulting. Support experience is not required - but customer facing experience is highly desirable.</li>\n</ul>\n<ul>\n<li>Hands-on experience in at least two of the following at production scale:</li>\n</ul>\n<ul>\n<li>Big Data (Spark, Hadoop, Kafka)</li>\n</ul>\n<ul>\n<li>Machine Learning / Artificial Intelligence projects</li>\n</ul>\n<ul>\n<li>Data Science / Streaming use cases</li>\n</ul>\n<ul>\n<li>Spark expertise is a big advantage.</li>\n</ul>\n<ul>\n<li>Strong background in customer-facing support leadership roles.</li>\n</ul>\n<ul>\n<li>Excellent troubleshooting skills across distributed systems.</li>\n</ul>\n<ul>\n<li>Strong ownership mindset with ability to thrive in a fast-paced, startup-like environment with evolving needs.</li>\n</ul>\n<ul>\n<li>Bachelor’s/Master’s in Computer Science or equivalent technical field.</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region click here.</p>\n<p><strong>Our Commitment to Diversity and Inclusion:</strong></p>\n<p>We are committed to fostering an inclusive culture where everyone feels valued, respected, and empowered to contribute their best work.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b47cf70c-31a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8409447002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data","Machine Learning","Artificial Intelligence","Data Science","Streaming use cases","Spark","Hadoop","Kafka"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:07.660Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data, Machine Learning, Artificial Intelligence, Data Science, Streaming use cases, Spark, Hadoop, Kafka"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fbd26168-f11"},"title":"Data Scientist","description":"<p><strong>About the role</strong></p>\n<p>We&#39;re looking for a Data Scientist to join our team at Stripe. 
As a Data Scientist, you will work closely with our Product, Finance, Payments, Security, Risk, Growth and Go-to-Market teams to optimize our systems and leverage data to make strategic business decisions.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Partner with cross-functional teams to ensure that our users, products, and business have the models, data products, and insights needed to make decisions and grow responsibly.</li>\n<li>Analyze data, build machine learning and statistical models, and run experiments to drive impact.</li>\n<li>Influence how our products work, how our business works, and how our go-to-market motions operate.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>PhD + 3 years, MS/MA + 6 years or BS/BA + 8 years of data science/quantitative modeling experience.</li>\n<li>Proficiency in SQL and a computing language such as Python or R.</li>\n<li>Strong knowledge and hands-on experience in several of the following areas: machine learning, statistics, optimization, product analytics, causal inference, and/or experimentation.</li>\n</ul>\n<p><strong>Preferred qualifications</strong></p>\n<ul>\n<li>Experience deploying models in production and adjusting model thresholds to improve performance.</li>\n<li>Experience designing, running, and analyzing complex experiments or leveraging causal inference designs.</li>\n<li>Experience with distributed tools such as Spark, Hadoop, etc.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fbd26168-f11","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/5895430","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","R","Machine Learning","Statistics","Optimization","Product Analytics","Causal Inference","Experimentation"],"x-skills-preferred":["Distributed Tools","Spark","Hadoop"],"datePosted":"2026-04-18T15:48:35.112Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, R, Machine Learning, Statistics, Optimization, Product Analytics, Causal Inference, Experimentation, Distributed Tools, Spark, Hadoop"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5a5a8459-f04"},"title":"Engineering Manager of Managers, Data Platform","description":"<p>Job Description:</p>\n<p><strong>Who we are</strong></p>\n<p>Stripe is a financial infrastructure platform for businesses. Millions of companies - from the world’s largest enterprises to the most ambitious startups - use Stripe to accept payments, grow their revenue, and accelerate new business opportunities.</p>\n<p><strong>About the team</strong></p>\n<p>The Big Data Infrastructure organization is a globally distributed team of approximately 40 engineers spread across Dublin, Bangalore, Seattle, and San Francisco. 
This team is the backbone of the company’s data ecosystem, responsible for building, scaling, and maintaining the highly reliable platforms that power data storage, orchestration, and processing at scale.</p>\n<p>As the Head of Big Data Infra, you will lead a global, ~40-person engineering organization responsible for the foundational data platforms that drive the business. Reporting directly to the Head of Compute, you will define the strategic vision and roadmap for the company&#39;s data lake, orchestration pipelines, and batch computing environments.</p>\n<p>The team&#39;s technical portfolio spans four core domains:</p>\n<ul>\n<li>Datalake (Storage): Managing scalable cloud storage and metadata layers, leveraging Amazon S3, Apache Iceberg (metastore and integrations), SAL, and Hive Metastore (HMS).</li>\n</ul>\n<ul>\n<li>Data Orchestration: Ensuring robust pipeline execution and scheduling using Apache Airflow.</li>\n</ul>\n<ul>\n<li>Batch Compute Infra (Data Store): Maintaining foundational data infrastructure and legacy systems, including Hadoop.</li>\n</ul>\n<ul>\n<li>Batch Compute Experience (Data Processing): Optimizing and delivering powerful data processing environments utilizing Apache Spark and Apache Celeborn.</li>\n</ul>\n<p><strong>What you’ll do</strong></p>\n<p>You will move beyond day-to-day management to act as an industry leader, effectively advocating for your organization&#39;s mission and impact. You will be expected to see problems others don&#39;t and rally people to independently create solutions.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Set Strategic Vision: Define the scope, vision, and goals for your organization with little or no guidance. You will anticipate industry trends to influence Stripe&#39;s long-range plans and set direction on a multi-year timeframe.</li>\n</ul>\n<ul>\n<li>Lead at Scale: Manage the achievement of and accountability for broad swaths of programs. 
You will establish wide-ranging and scaled processes, anticipating and removing roadblocks across multiple teams.</li>\n</ul>\n<ul>\n<li>Drive Operational Excellence: Instill a culture of rigorous thinking and meticulous craftsmanship. You will ensure your organization drives constant improvement in team processes and maintains high standards of operational rigor.</li>\n</ul>\n<ul>\n<li>Indirect Influence: Use indirect influence to steer other teams toward making the right decisions for Stripe. You will effectively communicate your team&#39;s plan and how it links to Stripe&#39;s company vision to cross-functional stakeholders.</li>\n</ul>\n<ul>\n<li>Obsess Over Talent: Proactively invest in the development of the organization and its people at all levels. You will recruit world-class talent and coach your direct reports, who are themselves managers, to elevate the skills of the leadership team.</li>\n</ul>\n<ul>\n<li>Stewardship &amp; Culture: Act as an ambassador and advocate for Stripe, modeling ownership for all other Stripes. You will actively work to increase Stripe&#39;s inclusivity and diversity and use our operating principles to guide decision-making.</li>\n</ul>\n<p><strong>Who you are</strong></p>\n<p>We’re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. 
The preferred qualifications are a bonus, not a requirement.</p>\n<p><strong>Minimum requirements</strong></p>\n<ul>\n<li>Bachelor’s degree or equivalent practical experience, with a minimum of 5 years of software development experience.</li>\n</ul>\n<ul>\n<li>Minimum 5 years of experience in a technical leadership role, overseeing strategic projects.</li>\n</ul>\n<ul>\n<li>Minimum 3 years of Manager of Managers experience (managing other engineering managers).</li>\n</ul>\n<ul>\n<li>Experience building diverse teams to tackle challenging technical problems.</li>\n</ul>\n<ul>\n<li>Ability to thrive in a collaborative environment involving different stakeholders and subject matter experts.</li>\n</ul>\n<p><strong>Preferred qualifications</strong></p>\n<ul>\n<li>Strategic Ambiguity: Proven ability to translate chaos into clarity and navigate complex, high-impact work where you must define your own scope.</li>\n</ul>\n<ul>\n<li>Infrastructure at Scale: Successfully shipped and operated critical infrastructure with significant responsibility over funds or critical data.</li>\n</ul>\n<ul>\n<li>Cross-Functional Influence: A track record of getting other teams on board with your vision to support execution in a way that benefits the broader company.</li>\n</ul>\n<ul>\n<li>Curiosity: You enjoy learning and diving into the nuts-and-bolts of how things work (e.g., global money movement rails, currency conversion, or inter-company flows).</li>\n</ul>\n<ul>\n<li>Humility and Adaptability: You are humble and self-aware, with a history of adapting your management approach across different environments and seeking feedback to grow as a leader.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5a5a8459-f04","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7747391","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Strategic vision","Technical leadership","Project management","Team management","Communication","Problem-solving","Infrastructure at scale","Cross-functional influence","Curiosity","Humility and adaptability"],"x-skills-preferred":["Apache Iceberg","Apache Airflow","Apache Spark","Apache Celeborn","Amazon S3","Hive Metastore","SAL","Cloud storage","Metadata layers","Data orchestration","Batch computing infrastructure","Legacy systems","Hadoop","Global money movement rails","Currency conversion","Inter-company flows"],"datePosted":"2026-04-18T15:47:47.234Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Strategic vision, Technical leadership, Project management, Team management, Communication, Problem-solving, Infrastructure at scale, Cross-functional influence, Curiosity, Humility and adaptability, Apache Iceberg, Apache Airflow, Apache Spark, Apache Celeborn, Amazon S3, Hive Metastore, SAL, Cloud storage, Metadata layers, Data orchestration, Batch computing infrastructure, Legacy systems, Hadoop, Global money movement rails, Currency conversion, Inter-company flows"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3480e0e8-2e9"},"title":"Senior Data Scientist, Ads","description":"<p>We are looking for a highly motivated and experienced Senior Data Scientist to join our growing Ads Data 
Science team. As a Senior Data Scientist, you will play a key role in developing and applying cutting-edge DS models/methods to improve the adoption and performance of our advertising platform through data-driven insights.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and apply DS solutions to inform improvements in advertiser experience and Reddit&#39;s ad platform</li>\n<li>Analyze large-scale datasets to identify trends, patterns, and insights that can be used to improve the effectiveness of our advertising platform</li>\n<li>Collaborate with product managers and engineers to define product requirements and translate them into data science solutions</li>\n<li>Develop ML models &amp; DS methods to improve anomaly detection, prediction, &amp; pattern recognition</li>\n<li>Communicate findings and recommendations to stakeholders across the organization</li>\n<li>Stay up-to-date on the latest advancements in machine learning and data science</li>\n<li>Mentor and guide junior data scientists on the team</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Advanced degree (Masters or Ph.D.) in a quantitative field such as Statistics, Mathematics, Physics, Economics, or Operations Research</li>\n<li>For M.S. holders: 5+ years of industry experience in applied science or data science roles</li>\n<li>For Ph.D. 
holders: 4+ years of industry experience in applied science or data science roles</li>\n<li>Platform experience and a deep understanding of the ads ecosystem</li>\n<li>Strong understanding of statistical modeling, machine learning algorithms, causal inference and experimental design</li>\n<li>Experience with large-scale data processing and analysis using tools such as Spark, Hadoop, or Hive; knowledge of BigQuery a plus</li>\n<li>Proficiency in Python or R and experience with machine learning libraries such as scikit-learn, TensorFlow, or PyTorch</li>\n<li>Experience with SQL and relational databases</li>\n<li>Excellent communication and presentation skills</li>\n</ul>\n<p>Bonus Points:</p>\n<ul>\n<li>Experience with online advertising and ad tech</li>\n<li>Experience with causal inference and A/B testing</li>\n<li>Contributions to open-source projects or publications in relevant conferences or journals</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Comprehensive Healthcare Benefits and Income Replacement Programs</li>\n<li>401k with Employer Match</li>\n<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>\n<li>Family Planning Support</li>\n<li>Gender-Affirming Care</li>\n<li>Mental Health &amp; Coaching Benefits</li>\n<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>\n<li>Generous Paid Parental Leave</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3480e0e8-2e9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Reddit","sameAs":"https://www.redditinc.com","logo":"https://logos.yubhub.co/redditinc.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/reddit/jobs/6042236","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,800-$267,100 
USD","x-skills-required":["Python","R","Spark","Hadoop","BigQuery","scikit-learn","TensorFlow","PyTorch","SQL","relational databases","statistical modeling","machine learning algorithms","causal inference","experimental design"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:41.569Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, R, Spark, Hadoop, BigQuery, scikit-learn, TensorFlow, PyTorch, SQL, relational databases, statistical modeling, machine learning algorithms, causal inference, experimental design","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190800,"maxValue":267100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aa0976d9-f71"},"title":"Senior Software Engineer - Distributed Data Systems","description":"<p>We are seeking a senior software engineer to join our Runtime team at Databricks. 
As a member of this team, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>\n<p>Some example projects you will be working on include:</p>\n<ul>\n<li>Developing the de facto open source standard framework for big data, Apache Spark</li>\n<li>Providing reliable and high performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store</li>\n<li>Building the next generation query optimizer and execution engine that&#39;s fast, tuning free, scalable, and robust</li>\n</ul>\n<p>We look for candidates with a strong foundation in algorithms and data structures and their real-world use cases, experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop), and 5+ years of production level experience in either Java, Scala or C++.</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region, please visit our website.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aa0976d9-f71","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6936994002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$157,700-$213,800 USD","x-skills-required":["Java","Scala","C++","Apache Spark","Hadoop","Distributed systems","Data structures","Algorithms"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:19.533Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Apache Spark, Hadoop, Distributed systems, Data structures, Algorithms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":157700,"maxValue":213800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fe0d53c0-05e"},"title":"Delivery Solutions Architect","description":"<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Lakehouse platform. As a Delivery Solutions Architect (DSA), you will play a critical role during this journey. 
The DSA works across a small number of our largest or highest potential key accounts, collaborating across Databricks teams to accelerate the adoption and growth of the Databricks platform.</p>\n<p>As a DSA, you will help ensure customer success by driving focus and technical accountability to our most complex customers who need guidance to accelerate consumption on Databricks workloads that they have already selected. This is a hybrid technical and commercial role. It is commercial in the sense that you will be required to own and drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, owning executive relationships and creating and driving plans and strategies for Databricks colleagues to execute upon.</p>\n<p>This is in parallel to being technical, with expectations being that you become at least Level 200 across all Databricks products/workloads and that you become the Use Case-specific technical lead post Technical Win. You will bring strong executive relationship management skills and high levels of technical credibility to effectively engage and communicate at all levels with an organization, in particular with a track record of building strong relationships with the customers&#39; executives and C-suite, elevating the conversation, and helping them realize the value of Databricks.</p>\n<p>You will report directly to a Director, Field Engineering, as part of your Business Unit&#39;s Technical GM organization. 
You will play a key role in establishing the fundamental assets and best practices within the DSA team, mentoring other DSAs and wider account team members within your region, helping them develop personally, professionally and to further their careers.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Engage with the Solutions Architect to understand the full Use Case Demand Plan for prioritized customers.</li>\n<li>Own the Post-Technical Win technical account strategy and investment plan for the majority of Databricks Use Cases within our most strategic accounts.</li>\n<li>Be the accountable technical leader assigned to specific Use Cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty/ambiguity and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks.</li>\n<li>Be the first point of contact for any technical issues or questions related to production/go live status of agreed upon Use Cases within an account, oftentimes servicing multiple use cases within the largest and most complex organizations.</li>\n<li>Leverage both Shared Services of User Education, Onboarding/Technical Services and Support resources, along with escalating to Level 400/500 technical experts (Specialist Solution Architects and Product Specialists) to execute on the right tasks that are beyond your scope of activities or expertise.</li>\n<li>Create, own and execute a PoV as to how key use cases can be accelerated into production, bringing EM/PM in to prepare Professional Services proposals.</li>\n<li>Navigate Databricks Product and Engineering teams for New Product Innovations, Private Previews and Upgrade needs (DBR, E2 and Unity Catalog).</li>\n<li>Build and maintain an executive level as well as a detailed programme level success plan that covers all activities of Customer, PS, Partner, SSA, Product Specialist, SA to cover the below 
workstreams:</li>\n</ul>\n<ul>\n<li>Key use cases moving from &#39;win&#39; to production</li>\n<li>Enablement / user growth plan</li>\n<li>Product adoption (strategy and activities to increase adoption of LH vision)</li>\n<li>Organic needs for current investment, e.g. Cloud Cost control, Tuning &amp; Optimization</li>\n<li>Executive and operational governance</li>\n<li>Proactively provide internal and external updates</li>\n<li>KPI reporting on the status of consumption and customer health, covering investment status, key risks, product adoption and use case progression to your Technical GM</li>\n<li>Development of reusable and scalable assets and mentorship of junior team members to establish the DSA team</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fe0d53c0-05e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8482406002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Engineering technologies (e.g. Spark, Hadoop, Kafka)","Data Warehousing (e.g. SQL, OLTP/OLAP/DSS)","Data Science and Machine Learning technologies (e.g. 
pandas, scikit-learn, HPO)","Executive disciplinary management","Influencing and leading teams","Strategic Management Consulting","Building and steering to a value case","Quota ownership, achievement and track record of great performance against objective target","Proficient in both Korean and English (Native level Korean and Business level English) verbally and in writing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:45.267Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seoul, South Korea"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Engineering technologies (e.g. Spark, Hadoop, Kafka), Data Warehousing (e.g. SQL, OLTP/OLAP/DSS), Data Science and Machine Learning technologies (e.g. pandas, scikit-learn, HPO), Executive disciplinary management, Influencing and leading teams, Strategic Management Consulting, Building and steering to a value case, Quota ownership, achievement and track record of great performance against objective target, Proficient in both Korean and English (Native level Korean and Business level English) verbally and in writing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_80b94e35-0f3"},"title":"Staff Technical Solutions Engineer (Platform)","description":"<p>We are seeking a highly skilled Frontline Staff Technical Solutions Engineer with 12+ years of experience to join our Platform Support team. 
This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>\n<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>\n<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>\n<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>\n<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>\n<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>\n<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. 
Implement optimizations and enhancements to improve platform stability and efficiency.</li>\n<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>\n<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. Propose and drive initiatives to enhance platform capabilities and support processes.</li>\n<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Experience: Minimum of 12 years of hands-on experience in a technical support or engineering role related to Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>\n<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines and distributed computing are required. 
Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimisation.</li>\n<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>\n<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>\n<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>\n<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>\n<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degree or relevant certifications are highly desirable.</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>\n<li>Familiarity with automation tools and CI/CD pipelines.</li>\n<li>Understanding of data governance and compliance requirements.</li>\n</ul>\n<p>Why Join Us?</p>\n<ul>\n<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>\n<li>Career Growth: Opportunities for professional development and career advancement.</li>\n<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. 
Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_80b94e35-0f3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7845334002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Databricks architecture","Apache Spark","AWS","Azure","GCP","Unix/Linux administration","DevOps practices","log analysis and monitoring tools"],"x-skills-preferred":["Hadoop","Kafka","NoSQL databases","automation tools","CI/CD pipelines","data governance and compliance 
requirements"],"datePosted":"2026-04-18T15:45:36.598Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_12b7c011-a90"},"title":"Staff Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>\n<p>We develop and operate one of the largest scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day. 
At our scale, we regularly observe cloud hardware, network, and operating system faults, and our software must gracefully shield our customers from any of the above.</p>\n<p>As a software engineer on the Runtime team at Databricks, you will be building the next generation distributed data storage and processing systems that can outperform specialised SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>\n<p>Below are some example projects:</p>\n<ul>\n<li>Apache Spark: Develop the de facto open source standard framework for big data.</li>\n<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n<li>Delta Lake: A storage management system that combines the scale and cost-efficiency of data lakes, the performance and reliability of a data warehouse, and the low latency of streaming.</li>\n<li>Delta Pipelines: It&#39;s difficult to manage even a single data engineering pipeline. 
The goal of the Delta Pipelines project is to make it simple and possible to orchestrate and operate tens of thousands of data pipelines.</li>\n<li>Performance Engineering: Build the next generation query optimizer and execution engine that&#39;s fast, tuning-free, scalable, and robust.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS in Computer Science, related technical field or equivalent practical experience.</li>\n<li>Optional: MS or PhD in databases, distributed systems.</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>\n<li>Driven by delivering customer value and impact.</li>\n<li>8+ years of production-level experience in either Java, Scala, or C++.</li>\n<li>Strong foundation in algorithms and data structures and their real-world use cases.</li>\n<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop).</li>\n</ul>\n<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. 
The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_12b7c011-a90","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/5646855002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$192,000-$260,000 USD","x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:34.255Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a317d234-6b0"},"title":"Data Scientist, Ads","description":"<p>We are looking for a highly motivated and experienced Data Scientist to join our growing Ads Data Science team. 
As a Data Scientist, you will play a key role in developing as well as applying cutting-edge DS models/methods to improve our understanding of the dynamics that drive the success of our advertising platform, and identify opportunities to accelerate that success.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Analyze large-scale datasets to identify trends, patterns, and insights that can be used to improve the effectiveness of our advertising platform</li>\n<li>Develop ML models &amp; DS methods for improved anomaly detection, prediction, and pattern recognition</li>\n<li>Communicate findings and recommendations to stakeholders across the organization</li>\n<li>Collaborate with product, engineering, sales, and marketing partners to define product and program requirements and translate them into data science solutions</li>\n<li>Stay up-to-date on the latest advancements in machine learning and data science</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Advanced degree (Masters or Ph.D.) in a quantitative field such as Statistics, Mathematics, Physics, Economics, or Operations Research</li>\n<li>For M.S. holders: 3+ years of industry experience in applied science or data science roles</li>\n<li>For Ph.D. 
holders: 2+ years of industry experience in applied science or data science roles</li>\n<li>Strong understanding of statistical modeling, machine learning algorithms, causal inference and experimental design</li>\n<li>Experience with large-scale data processing and analysis using tools such as Spark, Hadoop, or Hive; knowledge of BigQuery a plus</li>\n<li>Proficiency in Python or R and experience with machine learning libraries such as scikit-learn, TensorFlow, or PyTorch</li>\n<li>Experience with SQL and relational databases</li>\n<li>Excellent communication and presentation skills</li>\n</ul>\n<p>Bonus Points:</p>\n<ul>\n<li>Experience with online advertising and ad tech</li>\n<li>Experience with causal inference and A/B testing</li>\n<li>Contributions to open-source projects or publications in relevant conferences or journals</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>\n<li>Family Planning Support</li>\n<li>Gender-Affirming Care</li>\n<li>Mental Health &amp; Coaching Benefits</li>\n<li>Comprehensive Medical Benefits &amp; Health Care Spending Account</li>\n<li>Registered Retirement Savings Plan with matching contributions</li>\n<li>Income Replacement Programs</li>\n<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>\n<li>Generous Paid Parental Leave</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a317d234-6b0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Reddit","sameAs":"https://www.redditinc.com","logo":"https://logos.yubhub.co/redditinc.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/reddit/jobs/7607124","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["statistical modeling","machine learning 
algorithms","causal inference","experimental design","large-scale data processing","Spark","Hadoop","BigQuery","Python","R","scikit-learn","TensorFlow","PyTorch","SQL","relational databases"],"x-skills-preferred":["online advertising","ad tech","A/B testing","open-source projects","publications"],"datePosted":"2026-04-18T15:45:22.663Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - British Columbia, Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"statistical modeling, machine learning algorithms, causal inference, experimental design, large-scale data processing, Spark, Hadoop, BigQuery, Python, R, scikit-learn, TensorFlow, PyTorch, SQL, relational databases, online advertising, ad tech, A/B testing, open-source projects, publications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c6d7f1a0-882"},"title":"Resident Solutions Architect - Mumbai","description":"<p>We are seeking an experienced Resident Solution Architect (RSA) to join our Professional Services team and work directly with strategic customers on their data and AI transformation initiatives using the Databricks platform.</p>\n<p>As an RSA, you will serve as a trusted technical advisor and hands-on expert, guiding customers to solve complex big data challenges using the Databricks platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Collaborating with customers to understand their data and AI transformation goals and developing tailored solutions using the Databricks platform</li>\n<li>Designing and implementing scalable and secure data architectures using Apache Spark, Delta Lake, and other Databricks technologies</li>\n<li>Providing expert-level technical guidance and support to customers during the implementation process</li>\n<li>Identifying and addressing potential roadblocks 
and providing creative solutions to overcome them</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role</li>\n<li>4+ years of experience as a Solution Architect creating designs, solving Big Data challenges for customers</li>\n<li>Expertise in Apache Spark, distributed computing, and Databricks platform capabilities</li>\n<li>Comfortable writing code in Python, PySpark, and Scala</li>\n<li>Exceptional SQL, Spark SQL, Spark-streaming skills</li>\n<li>Advanced knowledge of Spark optimizations, Delta, Databricks Lakehouse Platforms</li>\n<li>Expertise in Azure</li>\n<li>Expertise in NoSQL databases (MongoDB, Redis, HBase)</li>\n<li>Expertise in data governance and security (Unity Catalog, RBAC)</li>\n<li>Ability to work with Partner Organization and deliver complex programs</li>\n<li>Ability to lead large technical delivery teams</li>\n<li>Understands the larger competitive landscape, such as EMR, Snowflake, and Sagemaker</li>\n<li>Experience of migration from On-prem / Cloud to Databricks is a plus</li>\n<li>Excellent communication and client-facing consulting skills, with the ability to simplify complex technical concepts</li>\n<li>Willingness to travel for onsite customer engagements within India</li>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<p>Good-to-have Skills:</p>\n<ul>\n<li>Experience with ML libraries/frameworks: Scikit-learn, TensorFlow, PyTorch</li>\n<li>Familiarity with MLOps tools and processes, including MLflow for tracking and deployment</li>\n<li>Experience delivering LLM and GenAI solutions at scale (RAG architectures, prompt engineering)</li>\n<li>Extensive experience on Hadoop, Trino, Ranger and other open-source technology stack</li>\n<li>Expertise on cloud platforms like AWS and GCP</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job 
scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c6d7f1a0-882","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8107166002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Data Lakes","Python","PySpark","Scala","SQL","Spark SQL","Spark-streaming","Azure","NoSQL databases","data governance","security","Unity Catalog","RBAC"],"x-skills-preferred":["ML libraries/frameworks","MLOps tools and processes","LLM and GenAI solutions","Hadoop","Trino","Ranger","AWS","GCP"],"datePosted":"2026-04-18T15:45:04.317Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mumbai, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Data Lakes, Python, PySpark, Scala, SQL, Spark SQL, Spark-streaming, Azure, NoSQL databases, data governance, security, Unity Catalog, RBAC, ML libraries/frameworks, MLOps tools and processes, LLM and GenAI solutions, Hadoop, Trino, Ranger, AWS, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9d5cd6ee-fd5"},"title":"ソリューションアーキテクト (プリセールス)","description":"<p>As a Pre-Sales Solution Architect (Analytics, AI, Big Data, Public Cloud), you will lead hands-on technical evaluation phases throughout the sales process. You will become a technical advisor to the sales team, collaborating with product teams to deliver customer requirements in the field. 
You will assist customers in achieving specific data-driven outcomes using the Databricks Data Intelligence Platform, support data teams in completing projects, and enable them to integrate our platform into their enterprise ecosystem.</p>\n<p>While identifying solutions to customers&#39; biggest challenges in big data, analytics, data engineering, and data science, you will work with the sales team to propose solutions to customers. You report to the Field Engineering Manager.</p>\n<p>Your impact:</p>\n<ul>\n<li>You will become an expert in the architecture and design of big data analysis.</li>\n<li>You will guide potential customers to adoption of Databricks.</li>\n<li>You will support customers by creating reference architectures, high-level deployment and migration plans, and demo applications.</li>\n<li>You will integrate Databricks and third-party applications to support customer architectures.</li>\n<li>You will lead workshops, seminars, and meetups to engage with the technical community.</li>\n<li>You will build successful relationships with clients in your assigned segment, providing both technical and business value.</li>\n<li>You will discover customer use cases.</li>\n<li>You will optimize existing customers&#39; Databricks environments.</li>\n<li>You will engage in activities to improve Databricks products.</li>\n</ul>\n<p>Requirements/Experience:</p>\n<ul>\n<li>Experience working with external clients in various industries as a pre-sales or post-sales professional.</li>\n<li>Experience demonstrating technical concepts, presenting, and using whiteboards.</li>\n<li>Experience designing and implementing architecture in public cloud (AWS, Azure, GCP).</li>\n<li>Experience working with enterprise accounts.</li>\n<li>Fluent communication in Japanese and business-level writing in English.</li>\n<li>Basic understanding of data domain or machine learning/AI domain, and SQL coding skills.</li>\n</ul>\n<p>Desirable Experience/Skills:</p>\n<ul>\n<li>Understanding of a core 
strength in data engineering or data science in a customer-facing pre-sales or consulting role.</li>\n<li>Experience with big data technologies such as Apache Spark, AI, data science, Hadoop, Cassandra.</li>\n<li>Experience coding in Python, Scala, Java, or R using Apache Spark.</li>\n<li>Development experience in data domain or machine learning/AI domain, particularly in Python (Pandas, NumPy, etc.).</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9d5cd6ee-fd5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437000002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","Scala","Java","R","SQL"],"x-skills-preferred":["Pandas","NumPy","Machine Learning","Cloud Computing"],"datePosted":"2026-04-18T15:44:47.621Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R, SQL, Pandas, NumPy, Machine Learning, Cloud Computing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_114d6e6c-0d0"},"title":"Staff Software Engineer (L4)","description":"<p>We&#39;re shaping the future of communications at Twilio, delivering innovative solutions to hundreds of thousands of businesses and empowering millions of developers worldwide to craft personalized customer experiences.</p>\n<p>Join the team as our next Staff 
Software Engineer in the Enterprise AI Engineering team. Twilio is undergoing a major business transformation powered by Enterprise AI, supported by a dedicated engineering team building the foundations for a unified, secure, and scalable operating system across GTM functions (Sales, Support, Operations, etc.) as well as Internal non-GTM functions (Finance, HR, Legal, etc.).</p>\n<p>In this role, you&#39;ll co-lead the design and development of our software infrastructure, driving technical vision and strategy to ensure scalability, reliability, and performance. You will oversee the integration of complex React-based front-ends with backend modular services, ensuring a seamless UI experience.</p>\n<p>As a Staff Software Engineer within Enterprise AI, you are the technical heartbeat of our products. Your role is to bridge the gap between bleeding-edge AI research and robust, full-stack production systems.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Co-lead the design and development of our software infrastructure, driving technical vision and strategy to ensure scalability, reliability, and performance.</li>\n<li>Drive the development of sophisticated, stateful web applications.</li>\n<li>Serve as developer leader in distributed systems, data technologies, with strong software engineering skills.</li>\n<li>Drive technical innovation and research to stay at the forefront of emerging data technologies and best practices.</li>\n<li>Mentor and elevate a team of high-performing engineers.</li>\n<li>Collaborate closely with cross-functional teams to understand business requirements and translate them into scalable and efficient technical solutions.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, or a related field.</li>\n<li>8+ years of experience in data engineering, software development, or a related field, with at least 3 years in a technical leadership role.</li>\n<li>Experience with full-stack development building 
web apps, using modern programming languages such as JavaScript, Typescript or React.</li>\n<li>Proven track record of architecting and delivering complex data projects at scale, with a deep understanding of data infrastructure and distributed systems.</li>\n<li>Strong understanding of data modeling, data warehousing, and ETL processes, with experience designing and optimizing data pipelines.</li>\n<li>Excellent communication and collaboration skills, with the ability to influence technical decisions and drive alignment across teams.</li>\n<li>Strong leadership skills, with a track record of mentoring and developing high-performing engineering teams.</li>\n<li>Demonstrated ability to thrive in a fast-paced, dynamic environment and deliver results under tight timelines.</li>\n</ul>\n<p>Desired:</p>\n<ul>\n<li>Experience developing production-quality LLM applications and using modern agent frameworks such as Langchain, Langgraph, Llamaindex, LangSmith, LangFuse, CrewAI, and/or others is a plus.</li>\n<li>Expertise in big data technologies such as Hadoop, Spark, Kafka, and cloud-based data services (AWS/GCP/Azure).</li>\n</ul>\n<p>Travel:</p>\n<p>This role will be remote and based in Colombia. Travel may be required to participate in project or team in-person meetings.</p>\n<p>What We Offer:</p>\n<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.</p>\n<p>Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That&#39;s why we seek out colleagues who embody our values , something we call Twilio Magic. 
Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts.</p>\n<p>So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!</p>","url":"https://yubhub.co/jobs/job_114d6e6c-0d0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7716279","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["full-stack development","JavaScript","Typescript","React","data engineering","software development","distributed systems","data technologies","strong software engineering skills","technical innovation","research","emerging data technologies","best practices","mentorship","team leadership","communication","collaboration","influence","alignment","leadership skills","mentoring","high-performing engineering teams","fast-paced","dynamic environment","results under tight timelines"],"x-skills-preferred":["LLM applications","modern agent frameworks","Langchain","Langgraph","Llamaindex","LangSmith","LangFuse","CrewAI","big data technologies","Hadoop","Spark","Kafka","cloud-based data services","AWS","GCP","Azure"],"datePosted":"2026-04-18T15:42:12.273Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Colombia"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"full-stack development, JavaScript, Typescript, React, data engineering, software development, distributed systems, data technologies, strong software engineering skills, technical
innovation, research, emerging data technologies, best practices, mentorship, team leadership, communication, collaboration, influence, alignment, leadership skills, mentoring, high-performing engineering teams, fast-paced, dynamic environment, results under tight timelines, LLM applications, modern agent frameworks, Langchain, Langgraph, Llamaindex, LangSmith, LangFuse, CrewAI, big data technologies, Hadoop, Spark, Kafka, cloud-based data services, AWS, GCP, Azure"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9b657c4e-8a1"},"title":"Member of Technical Staff - Data Platform","description":"<p><strong>About the Role</strong></p>\n<p>As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X&#39;s data movement and compute. You will take ownership of infrastructure components that process trillions of events daily, driving the scalability, performance, and reliability of the systems that power product and ML workloads across the company.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and implement high-throughput, low-latency data ingestion and transport systems.</li>\n<li>Scale and optimize multi-tenant Kafka infrastructure supporting real-time workloads.</li>\n<li>Extend and tune Spark, Flink, and Trino for demanding production pipelines.</li>\n<li>Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.</li>\n<li>Debug and optimize distributed systems, with a focus on reliability and performance under load.</li>\n<li>Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Proven expertise in distributed systems, stream processing, or large-scale data platforms.</li>\n<li>Proficiency in Rust, Go, Scala or similar systems languages.</li>\n<li>Hands-on experience 
with Kafka, Flink, Spark, Trino, or Hadoop in production.</li>\n<li>Strong debugging, profiling, and performance optimization skills.</li>\n<li>Track record of shipping and maintaining critical infrastructure.</li>\n<li>Comfortable working in fast-moving, high-stakes environments with minimal guardrails.</li>\n</ul>\n<p><strong>Compensation and Benefits</strong></p>\n<p>$180,000 - $440,000 USD</p>\n<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>","url":"https://yubhub.co/jobs/job_9b657c4e-8a1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.x.ai/","logo":"https://logos.yubhub.co/x.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4803862007","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["distributed systems","stream processing","large-scale data platforms","Rust","Go","Scala","Kafka","Flink","Spark","Trino","Hadoop","debugging","profiling","performance optimization"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:40:03.394Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, stream processing, large-scale data platforms, Rust, Go, Scala, Kafka, Flink, Spark, Trino, Hadoop, debugging, profiling, performance
optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd7fb909-289"},"title":"Web Crawling Engineer","description":"<p>About Mistral AI</p>\n<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>\n<p>We are looking for a skilled and motivated Web Crawling Engineer to join our dynamic engineering team. The ideal candidate should have a solid background in distributed web crawling, scraping and data extraction, with experience using advanced tools and technologies to collect and process large-scale data from diverse web sources at large scale.</p>\n<p>Responsibilities</p>\n<p>As a Web crawling engineer, you will be responsible for:</p>\n<ul>\n<li>Developing and maintaining web crawlers using Go to extract data from target websites.</li>\n<li>Utilizing headless browsing techniques, such as Chrome DevTools, to automate and optimize data collection processes.</li>\n<li>Collaborating with cross-functional teams to identify, scrape, and integrate data from APIs and web pages to support business objectives.</li>\n<li>Creating and implementing efficient parsing patterns using tokenizers, regular expressions, XPaths, and CSS selectors to ensure accurate data extraction.</li>\n<li>Designing and managing distributed job queues using technologies such as Redis, Aerospike and Kubernetes to handle large-scale distributed crawling and processing tasks.</li>\n<li>Developing strategies to monitor and ensure data quality, accuracy, and integrity throughout the crawling and indexing process.</li>\n<li>Continuously improving and optimizing existing web crawling infrastructure to maximize efficiency and adapt to new 
challenges.</li>\n</ul>\n<p>About You</p>\n<p>Core programming and web technologies</p>\n<ul>\n<li>Proficiency in Go (Golang)/Rust/Zig for building scalable and efficient web crawlers.</li>\n<li>Deep understanding of TCP, UDP, TLS and HTTP/1.1,2,3 protocols and web communication.</li>\n<li>Knowledge of HTML, CSS, and JavaScript for parsing and navigating web content.</li>\n<li>Familiarity with cloud platforms (AWS, GCP), orchestration (Kubernetes, Nomad), and containerization (Docker) for deployment.</li>\n</ul>\n<p>Data Structures &amp; Algorithms</p>\n<ul>\n<li>Mastery of queues, stacks, hash maps, and other data structures for efficient data handling.</li>\n<li>Ability to design and optimize algorithms for large-scale web crawling.</li>\n</ul>\n<p>Web Scraping &amp; Data Acquisition</p>\n<ul>\n<li>Hands-on experience with networking and web scraping libraries.</li>\n<li>Understanding of how search engines work and best practices for web crawling optimization.</li>\n</ul>\n<p>Databases &amp; Data Storage</p>\n<ul>\n<li>Experience with SQL and/or NoSQL databases (knowing Aerospike is a bonus) for storing and managing crawled data.</li>\n<li>Familiarity with data warehousing and scalable storage solutions.</li>\n</ul>\n<p>Distributed Systems &amp; Big Data</p>\n<ul>\n<li>Knowledge of distributed systems (e.g., Hadoop, Spark) for processing large datasets.</li>\n</ul>\n<p>Bonus Skills (Nice-to-Have)</p>\n<ul>\n<li>Experience with web archiving projects &amp; tooling, open-source archiving is a big plus!</li>\n<li>Experience applying Machine Learning to improve crawling efficiency or accuracy.</li>\n<li>Experience with low-level networking programming and/or userspace TCP/IP stacks.</li>\n</ul>\n<p>Hiring Process</p>\n<p>Here is what you should expect:</p>\n<ul>\n<li>Introduction call - 35 min</li>\n<li>Hiring Manager Interview - 30 min</li>\n<li>Live-coding Interview - 45 min</li>\n<li>System Design Interview - 45 min</li>\n<li>Deep dive interview (optional) - 
60min</li>\n<li>Culture-fit discussion - 30 min</li>\n<li>Reference checks</li>\n</ul>\n<p>Additional Information</p>\n<p>Location &amp; Remote</p>\n<p>This role is primarily based in one of our European offices , Paris, France and London, UK. We will prioritize candidates who either reside there or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team. In certain specific situations, we will also consider remote candidates based in one of the countries listed in this job posting , currently France, UK, Germany, Belgium, Netherlands, Spain and Italy. In any case, we ask all new hires to visit our Paris HQ office:</p>\n<ul>\n<li>for the first week of their onboarding (accommodation and travelling covered)</li>\n<li>then at least 2 days per month</li>\n</ul>\n<p>What we offer</p>\n<p>💰 Competitive salary and equity</p>\n<p>🧑‍⚕️ Health insurance</p>\n<p>🚴 Transportation allowance</p>\n<p>🥎 Sport allowance</p>\n<p>🥕 Meal vouchers</p>\n<p>💰 Private pension plan</p>\n<p>🍼 Parental : Generous parental leave policy</p>\n<p>🌎 Visa sponsorship</p>","url":"https://yubhub.co/jobs/job_dd7fb909-289","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/c96bf665-7d73-406b-8d8f-ddf8df5d160f","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go","Rust","Zig","TCP","UDP","TLS","HTTP/1.1","HTTP/2","HTTP/3","HTML","CSS","JavaScript","cloud platforms","orchestration","containerization","queues","stacks","hash maps","SQL","NoSQL databases","data warehousing","scalable storage solutions","distributed
systems","Hadoop","Spark"],"x-skills-preferred":["web archiving projects","Machine Learning","low-level networking programming","userspace TCP/IP stacks"],"datePosted":"2026-04-17T12:48:06.790Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Rust, Zig, TCP, UDP, TLS, HTTP/1.1, HTTP/2, HTTP/3, HTML, CSS, JavaScript, cloud platforms, orchestration, containerization, queues, stacks, hash maps, SQL, NoSQL databases, data warehousing, scalable storage solutions, distributed systems, Hadoop, Spark, web archiving projects, Machine Learning, low-level networking programming, userspace TCP/IP stacks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_51fb35f8-ae2"},"title":"Data Engineer","description":"<p>We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining large-scale data systems and pipelines. 
You will work closely with cross-functional teams to ensure seamless integration with existing systems and to drive business growth through data-driven insights.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop scalable data architectures using cloud-based technologies such as AWS and Azure</li>\n<li>Develop and maintain ETL processes to extract, transform, and load data from various sources</li>\n<li>Collaborate with data scientists to develop and deploy machine learning models</li>\n<li>Ensure data quality, security, and compliance with regulatory requirements</li>\n<li>Work with stakeholders to identify business needs and develop data solutions</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, or related field</li>\n<li>3+ years of experience in data engineering or a related field</li>\n<li>Strong understanding of data architecture, design patterns, and best practices</li>\n<li>Experience with cloud-based technologies such as AWS and Azure</li>\n<li>Proficiency in programming languages such as Python, Java, or C++</li>\n<li>Excellent problem-solving skills and attention to detail</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s degree in Computer Science, Engineering, or related field</li>\n<li>Experience with big data technologies such as Hadoop, Spark, or NoSQL databases</li>\n<li>Familiarity with data visualization tools such as Tableau, Power BI, or D3.js</li>\n<li>Certification in data engineering or a related field</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a leading technology business</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Professional development opportunities</li>\n</ul>
","url":"https://yubhub.co/jobs/job_51fb35f8-ae2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Williams Advanced Engineering","sameAs":"https://www.williamsadvancedengineering.com/","logo":"https://logos.yubhub.co/williamsadvancedengineering.com.png"},"x-apply-url":"https://careers.williamsf1.com/job/trackside-operations-lead-hospitality-in-london-jid-494","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","Azure","Python","Java","C++","ETL","data architecture","data design patterns","data quality","data security","regulatory compliance"],"x-skills-preferred":["Hadoop","Spark","NoSQL databases","Tableau","Power BI","D3.js"],"datePosted":"2026-03-12T12:01:28.538Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Grove"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS, Azure, Python, Java, C++, ETL, data architecture, data design patterns, data quality, data security, regulatory compliance, Hadoop, Spark, NoSQL databases, Tableau, Power BI, D3.js"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_da3bf72e-353"},"title":"Data Engineer","description":"<p><strong>Data Engineer at Quantexa</strong></p>\n<p><strong>What we&#39;re all about.</strong></p>\n<p>It isn&#39;t often you get to be part of a tech company that has been innovating the data analytics market in ways no-one else can. Our technology started out in FinTech, helping tackle serious criminal activity. Now, its potential is virtually limitless. Working at Quantexa isn&#39;t just intellectually stimulating. We&#39;re a real team. Collaborating and constantly engineering better and better solutions. 
We&#39;re ambitious, we think things through and we&#39;re on a mission to discover just how far we can go.</p>\n<p><strong>The opportunity.</strong></p>\n<p>Our Quantexa Delivery team is all about contextualizing data. As a data engineer, you bring it all together. Working within a fast-paced team, you&#39;ll implement Quantexa&#39;s innovative technology for an ever-expanding list of domains including banking, insurance, government, healthcare. From building an end-to-end data pipeline that uses our award-winning software, to configuring our decision-making platform to detect key insights, there&#39;s always a new challenge around the corner.</p>\n<p><strong>What you&#39;ll be doing.</strong></p>\n<ul>\n<li>Writing defensive, fault tolerant and efficient code for production level data processing systems.</li>\n<li>Configuring and deploying Quantexa software using tools such as Spark, Hadoop, Scala, Elasticsearch, with our platform being hosted on both private and public virtual clouds, such as Google Cloud, Microsoft Azure and Amazon.</li>\n<li>You&#39;ll be a trusted source of knowledge for your clients. And you&#39;ll articulate technical concepts to a non-technical audience so they can make key decisions.</li>\n<li>Collaborate with both our solution architects and our R&amp;D engineers to champion solutions and standards for complex big data challenges. You proactively promote knowledge sharing and ensure best practice is followed.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<p><strong>What you&#39;ll bring.</strong></p>\n<ul>\n<li>You&#39;ll have a background in hands-on technical development, with at least 18 months of industry experience in a data engineering role or equivalent, and preferably some software industry experience.</li>\n<li>Proficiency in Scala, Java, Python, or a programming language associated with data engineering. Our primary language is Scala, but don&#39;t worry if that&#39;s not currently your strongest language. 
We believe that strong engineering principles are universal and transferable.</li>\n<li>As an expert in building and deploying production level data processing batch systems, you&#39;ll share an appreciation of what makes a high quality, operationally stable system and how to streamline all areas of development, release, and operations to achieve this.</li>\n<li>Experience with a variety of modern development tooling (e.g. Git, Gradle, Nexus) and technologies supporting automation and DevOps (e.g. Jenkins, Docker and a little bit of good old Bash scripting). You&#39;ll be familiar with developing within a version-controlled process that regularly makes use of these tools and technologies.</li>\n<li>A strong technical communication ability with demonstrable experience of working in rapidly changing client environments.</li>\n<li>Knowledge of testing libraries of common programming languages (such as ScalaTest or equivalent). Importantly, you&#39;ll know the difference between varying test types (unit test, integration test) and can cite specific examples of what they have written themselves.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p><strong>Our perks and quirks.</strong></p>\n<p>What makes you Q will help you to realize your full potential, flourish and enjoy what you do, while being recognized and rewarded with our broad range of benefits.</p>\n<ul>\n<li>Competitive salary</li>\n<li>Company bonus</li>\n<li>Annual leave, plus national holidays + your birthday off!</li>\n<li>Regularly bench-marked salary rates</li>\n<li>Well-being days</li>\n<li>Volunteer Day off</li>\n<li>Work from Home Equipment</li>\n<li>Free Calm App Subscription #1 app for meditation, relaxation and sleep</li>\n<li>Continuous Training and Development, including access to Udemy Business</li>\n<li>Spend up to 2 months working outside of your country of employment over a rolling 12-month period with our &#39;Work from Anywhere&#39; policy</li>\n<li>Employee Referral Program</li>\n<li>Team 
Social Budget &amp; Company-wide Socials</li>\n</ul>","url":"https://yubhub.co/jobs/job_da3bf72e-353","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/jUWNyFSzoRT8M2oK3WR2cQ/hybrid-data-engineer-in-kuala-lumpur-at-quantexa","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Scala","Java","Python","Spark","Hadoop","Elasticsearch","Git","Gradle","Nexus","Jenkins","Docker","Bash scripting"],"x-skills-preferred":["Scala","Java","Python"],"datePosted":"2026-03-09T17:06:37.471Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Kuala Lumpur"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Scala, Java, Python, Spark, Hadoop, Elasticsearch, Git, Gradle, Nexus, Jenkins, Docker, Bash scripting, Scala, Java, Python"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_90297dff-291"},"title":"Data Engineer","description":"<p><strong>Data Engineer at Quantexa</strong></p>\n<p><strong>What we&#39;re all about.</strong></p>\n<p>It isn&#39;t often you get to be part of a tech company that has been innovating the data analytics market in ways no-one else can. Our technology started out in FinTech, helping tackle serious criminal activity. Now, its potential is virtually limitless. Working at Quantexa isn&#39;t just intellectually stimulating. We&#39;re a real team. Collaborating and constantly engineering better and better solutions. 
We&#39;re ambitious, we think things through and we&#39;re on a mission to discover just how far we can go.</p>\n<p><strong>The opportunity.</strong></p>\n<p>Our Quantexa Delivery team is all about contextualizing data. As a data engineer, you bring it all together. Working within a fast-paced team, you&#39;ll implement Quantexa&#39;s innovative technology for an ever-expanding list of domains including banking, insurance, government, healthcare. From building an end-to-end data pipeline that uses our award-winning software, to configuring our decision-making platform to detect key insights, there&#39;s always a new challenge around the corner.</p>\n<p><strong>What you&#39;ll be doing.</strong></p>\n<ul>\n<li>Writing defensive, fault tolerant and efficient code for production level data processing systems.</li>\n<li>Configuring and deploying Quantexa software using tools such as Spark, Hadoop, Scala, Elasticsearch, with our platform being hosted on both private and public virtual clouds, such as Google Cloud, Microsoft Azure and Amazon.</li>\n<li>You&#39;ll be a trusted source of knowledge for your clients. And you&#39;ll articulate technical concepts to a non-technical audience so they can make key decisions.</li>\n<li>Collaborate with both our solution architects and our R&amp;D engineers to champion solutions and standards for complex big data challenges. You proactively promote knowledge sharing and ensure best practice is followed.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<p><strong>What you&#39;ll bring.</strong></p>\n<ul>\n<li>You&#39;ll have a background in hands-on technical development, with at least 18 months of industry experience in a data engineering role or equivalent, and preferably some software industry experience.</li>\n<li>Proficiency in Scala, Java, Python, or a programming language associated with data engineering. Our primary language is Scala, but don&#39;t worry if that&#39;s not currently your strongest language. 
We believe that strong engineering principles are universal and transferable.</li>\n<li>As an expert in building and deploying production level data processing batch systems, you&#39;ll share an appreciation of what makes a high quality, operationally stable system and how to streamline all areas of development, release, and operations to achieve this.</li>\n<li>Experience with a variety of modern development tooling (e.g. Git, Gradle, Nexus) and technologies supporting automation and DevOps (e.g. Jenkins, Docker and a little bit of good old Bash scripting). You&#39;ll be familiar with developing within a version-controlled process that regularly makes use of these tools and technologies.</li>\n<li>A strong technical communication ability with demonstrable experience of working in rapidly changing client environments.</li>\n<li>Knowledge of testing libraries of common programming languages (such as ScalaTest or equivalent). Importantly, you&#39;ll know the difference between varying test types (unit test, integration test) and can cite specific examples of what they have written themselves.</li>\n<li>Due to the nature of our client projects, candidates are required to be native or fluent in either French or Dutch.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p><strong>Our perks and quirks.</strong></p>\n<p>What makes you Q will help you to realize your full potential, flourish and enjoy what you do, while being recognized and rewarded with our broad range of benefits.</p>\n<p><strong>We offer:</strong></p>\n<ul>\n<li>Competitive salary</li>\n<li>Company bonus</li>\n<li>20 days annual leave (if you worked the previous year January – December), 12 compensation days, plus national holidays + your birthday off!</li>\n<li>Pension scheme</li>\n<li>Private Healthcare with DKV</li>\n<li>Death in Service and Income Protection</li>\n<li>Work from Home Allowance</li>\n<li>Eco Vouchers</li>\n<li>Meal Vouchers</li>\n<li>Free Calm App Subscription #1 app for meditation, 
relaxation and sleep</li>\n<li>Continuous Training and Development, including access to Udemy Business</li>\n<li>Spend up to 2 months working outside of your country of employment over a rolling 12-month period with our ‘Work from Anywhere’ policy</li>\n<li>Employee Referral Program</li>\n<li>Team Social Budget &amp; Company-wide Socials</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_90297dff-291","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/bU9LVK3n4PCQuGu6MtoceK/hybrid-data-engineer-in-brussels-at-quantexa","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Scala","Java","Python","Spark","Hadoop","Elasticsearch","Git","Gradle","Nexus","Jenkins","Docker","Bash scripting","ScalaTest"],"x-skills-preferred":[],"datePosted":"2026-03-09T17:02:53.548Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brussels"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Scala, Java, Python, Spark, Hadoop, Elasticsearch, Git, Gradle, Nexus, Jenkins, Docker, Bash scripting, ScalaTest"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c54b4db0-c3a"},"title":"Data Engineer","description":"<p><strong>Data Engineer at Quantexa</strong></p>\n<p><strong>What we&#39;re all about.</strong></p>\n<p>It isn&#39;t often you get to be part of a tech company that has been innovating the data analytics market in ways no-one else can. Our technology started out in FinTech, helping tackle serious criminal activity. Now, its potential is virtually limitless. 
Working at Quantexa isn&#39;t just intellectually stimulating. We&#39;re a real team. Collaborating and constantly engineering better and better solutions. We&#39;re ambitious, we think things through and we&#39;re on a mission to discover just how far we can go.</p>\n<p><strong>The opportunity.</strong></p>\n<p>Our Quantexa Delivery team is all about contextualizing data. As a Data Engineer, you bring it all together. Working within a fast-paced team, you&#39;ll implement Quantexa&#39;s innovative technology for an ever-expanding list of domains including banking, insurance, government, and healthcare. From building an end-to-end data pipeline that uses our award-winning software, to configuring our decision-making platform to detect key insights, there&#39;s always a new challenge around the corner.</p>\n<p><strong>What you&#39;ll be doing.</strong></p>\n<ul>\n<li>Writing defensive, fault-tolerant and efficient code for production-level data processing systems.</li>\n<li>Configuring and deploying Quantexa software using tools such as Spark, Hadoop, Scala, Elasticsearch, with our platform being hosted on both private and public virtual clouds, such as Google Cloud, Microsoft Azure and Amazon.</li>\n<li>You&#39;ll be a trusted source of knowledge for your clients. And you&#39;ll articulate technical concepts to a non-technical audience so they can make key decisions.</li>\n<li>Collaborate with both our solution architects and our R&amp;D engineers to champion solutions and standards for complex big data challenges.
You proactively promote knowledge sharing and ensure best practice is followed.</li>\n</ul>","url":"https://yubhub.co/jobs/job_c54b4db0-c3a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/eBP5YPZrqR6AJqma3gpwhQ/hybrid-data-engineer-in-tokyo-at-quantexa","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Scala","Java","Python","Spark","Hadoop","Elasticsearch","Google Cloud","Microsoft Azure","Amazon"],"x-skills-preferred":["Git","Gradle","Nexus","Jenkins","Docker","Bash scripting"],"datePosted":"2026-03-09T17:02:29.472Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Scala, Java, Python, Spark, Hadoop, Elasticsearch, Google Cloud, Microsoft Azure, Amazon, Git, Gradle, Nexus, Jenkins, Docker, Bash scripting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_783eb1af-88c"},"title":"Principal Software Engineer","description":"<p>We are seeking a highly skilled and experienced Principal Software Engineer to join our dynamic team.
The ideal candidate will have a solid background in data engineering and data analytics, with a proven track record of designing and implementing scalable data solutions.</p>\n<p>As a Principal Software Engineer, you will play a key role in driving our data strategy, ensuring the integrity and accessibility of our data and leveraging data insights to support business decisions.</p>\n<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. 
This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.</li>\n<li>Develop and optimize data models to support data analytics.</li>\n<li>Utilize advanced analytics techniques to extract insights from large datasets and drive data-driven decision making.</li>\n<li>Implement data validation frameworks and monitoring systems to detect and resolve data quality issues.</li>\n<li>Troubleshoot and resolve issues in data pipelines to ensure timely and accurate data delivery.</li>\n<li>Work with a security-first mindset, focusing on system scalability and maintainability.</li>\n<li>Coach and mentor peers and emerging team members while advocating for best practices.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.
These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor&#39;s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n<li>6+ years of experience in software engineering, with a focus on data engineering and data analytics.</li>\n<li>Solid experience with data processing frameworks such as Apache Spark, Hadoop.</li>\n<li>Expertise in SQL and experience with RDBMS, Key Value stores.</li>\n<li>Familiarity with cloud platforms and data services.</li>\n<li>Excellent problem-solving skills and the ability to work independently and as part of a team.</li>\n<li>Solid communication skills.</li>\n<li>Familiarity with Azure.</li>\n<li>Experience with machine learning and data science tools and frameworks.</li>\n<li>Knowledge of data visualization tools (e.g., Tableau, Power BI).</li>\n<li>Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>\n</ul>","url":"https://yubhub.co/jobs/job_783eb1af-88c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft
Advertising","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-36/","x-work-arrangement":"hybrid","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Spark","Hadoop","SQL","RDBMS","Key Value stores","cloud platforms","data services","Azure","machine learning","data science tools","data visualization tools","containerization","orchestration"],"x-skills-preferred":["data engineering","data analytics","data processing frameworks","data validation frameworks","data monitoring systems","security-first mindset","system scalability","maintainability","mentorship","best practices"],"datePosted":"2026-03-08T22:20:44.602Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Spark, Hadoop, SQL, RDBMS, Key Value stores, cloud platforms, data services, Azure, machine learning, data science tools, data visualization tools, containerization, orchestration, data engineering, data analytics, data processing frameworks, data validation frameworks, data monitoring systems, security-first mindset, system scalability, maintainability, mentorship, best practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6a19b1aa-62a"},"title":"Principal Software Engineer","description":"<p>We are seeking a highly skilled and experienced Principal Software Engineer to join our dynamic team. 
The ideal candidate will have a solid background in data engineering and data analytics, with a proven track record of designing and implementing scalable data solutions. As a Principal Software Engineer, you will play a key role in driving our data strategy, ensuring the integrity and accessibility of our data and leveraging data insights to support business decisions.</p>\n<p><strong>Responsibilities</strong></p>\n<p>Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.</p>\n<p>Develop and optimize data models to support data analytics.</p>\n<p>Utilize advanced analytics techniques to extract insights from large datasets and drive data-driven decision making.</p>\n<p>Implement data validation frameworks and monitoring systems to detect and resolve data quality issues.</p>\n<p>Troubleshoot and resolve issues in data pipelines to ensure timely and accurate data delivery.</p>\n<p>Work with a security-first mindset, focusing on system scalability and maintainability.</p>\n<p>Coach and mentor peers and emerging team members while advocating for best practices.</p>\n<p><strong>Qualifications</strong></p>\n<p>Required Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor&#39;s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p>6+ years of 
experience in software engineering, with a focus on data engineering and data analytics.</p>\n<p>Solid experience with data processing frameworks such as Apache Spark, Hadoop.</p>\n<p>Expertise in SQL and experience with RDBMS, Key Value stores.</p>\n<p>Familiarity with cloud platforms and data services.</p>\n<p>Excellent problem-solving skills and the ability to work independently and as part of a team.</p>\n<p>Solid communication skills.</p>\n<p>Familiarity with Azure.</p>\n<p>Experience with machine learning and data science tools and frameworks.</p>\n<p>Knowledge of data visualization tools (e.g., Tableau, Power BI).</p>\n<p>Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).</p>\n<p>#MicrosoftAI Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 – $304,200 per year. Certain roles may be eligible for benefits and other compensation. 
Find additional benefits and pay information here: <a href=\"https://careers.microsoft.com/us/en/us-corporate-pay\">https://careers.microsoft.com/us/en/us-corporate-pay</a></p>","url":"https://yubhub.co/jobs/job_6a19b1aa-62a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft Advertising","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-35/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Spark","Hadoop","SQL","RDBMS","Key Value stores","Cloud platforms","Data services","Azure","Machine learning","Data science tools and frameworks","Data visualization tools","Containerization","Orchestration"],"x-skills-preferred":["Master's Degree in Computer Science or related technical field","8+ years technical engineering experience","12+ years technical engineering experience"],"datePosted":"2026-03-08T22:20:08.137Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Spark, Hadoop, SQL, RDBMS, Key Value stores, Cloud platforms, Data services, Azure, Machine learning, Data science tools and frameworks, Data visualization tools, Containerization, Orchestration, Master's Degree in Computer Science or related technical field, 8+ years technical engineering experience, 12+ years technical engineering
experience","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_df9a4b26-709"},"title":"Senior Software Engineer","description":"<p>The Ads Data Platform Team, part of Microsoft AI, is hiring a Senior Software Engineer. This role is available in Redmond, WA. Our team powers the backbone of Microsoft’s global ads marketplace—gathering, storing, and enriching over half a trillion ad-serving events every day. We build data platforms that fuel business analytics, machine learning models, and real-time reporting at massive scale.</p>\n<p>As part of our team, you’ll:</p>\n<p>Design and operate high-scale, high-performance systems that process billions of events through near-real-time and offline pipelines.\nBuild data applications that directly impact Microsoft Ads’ double-digit annual growth.\nWork on cutting-edge technologies in distributed systems, machine learning, and big data.</p>\n<p><strong>Responsibilities</strong></p>\n<p>Work with BingAds stakeholders to determine requirements for new features to grow the Ads business.\nCreate system designs for feature requirements.\nEnsure the system meets security and compliance requirements and expectations.\nCreate a clear, articulated plan for testing and assuring solution quality.\nImplement the features with high efficiency, extensibility, diagnosability, reliability, and maintainability with few defects.\nReview product code to ensure it meets the team’s and Microsoft’s quality standards, is reliable and accurate, and is appropriate for the scale of the product feature.\nMaintain operations of the live service as issues arise on a rotational, on-call basis.\nIdentify solutions and mitigations to simple and complex issues and escalate as necessary.\nAct as a Designated Responsible Individual (DRI)
working on call to monitor system/product feature/service for degradation, downtime, or interruptions.\nRespond within the Service Level Agreement (SLA) timeframe.\nEscalate issues to appropriate owners.\nBuild knowledge, share new ideas, and share pain points of engineering tool gaps to improve software developer tools to support other programs, tools, and applications to create, debug, and maintain code for product features.\nContribute to the development of automation within production and deployment of a product feature.\nProfile and analyze distributed system performance and capacity bottlenecks.\nPropose and implement solutions to improve system latency and capacity to meet BingAds online serving requirements.</p>\n<p><strong>Qualifications</strong></p>\n<p>Required Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field</li>\n<li>4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python</li>\n<li>Ability to meet Microsoft, customer and/or government security screening requirements</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master’s Degree in Computer Science or related technical field</li>\n<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python</li>\n<li>Experience in Azure</li>\n<li>Experience in machine learning and online system design, implementation and qualification</li>\n<li>2+ years’ experience in Distributed Systems and Big Data Technologies such as Spark, Hadoop, HDFS, Kafka, Flink, Scala</li>\n</ul>\n<p>#MicrosoftAI #BingAds Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year.
There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $158,400 – $258,000 per year.</p>","url":"https://yubhub.co/jobs/job_df9a4b26-709","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-92/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $119,800 – $234,700 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Azure","Machine learning","Distributed Systems","Big Data Technologies"],"x-skills-preferred":["Spark","Hadoop","HDFS","Kafka","Flink","Scala"],"datePosted":"2026-03-08T22:17:03.136Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Azure, Machine learning, Distributed Systems, Big Data Technologies, Spark, Hadoop, HDFS, Kafka, Flink, Scala","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9afb309b-13e"},"title":"Principal Software Engineer","description":"<p>You will play a key role in driving our data strategy, ensuring the integrity and accessibility of our data and leveraging data insights to support business decisions.
As a Principal Software Engineer, you will collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions. You will develop and optimize data models to support data analytics, utilize advanced analytics techniques to extract insights from large datasets, and drive data-driven decision making. You will also implement data validation frameworks and monitoring systems to detect and resolve data quality issues, and troubleshoot and resolve issues in data pipelines to ensure timely and accurate data delivery. Additionally, you will work with a security-first mindset, focusing on system scalability and maintainability, and coach and mentor peers and emerging team members while advocating for best practices.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.</li>\n<li>Develop and optimize data models to support data analytics.</li>\n<li>Utilize advanced analytics techniques to extract insights from large datasets and drive data-driven decision making.</li>\n<li>Implement data validation frameworks and monitoring systems to detect and resolve data quality issues.</li>\n<li>Troubleshoot and resolve issues in data pipelines to ensure timely and accurate data delivery.</li>\n<li>Work with a security-first mindset, focusing on system scalability and maintainability.</li>\n<li>Coach and mentor peers and emerging team members while advocating for best practices.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.</li>\n<li>6+ years of experience in software engineering, with a focus
on data engineering and data analytics.</li>\n<li>Solid experience with data processing frameworks such as Apache Spark, Hadoop.</li>\n<li>Expertise in SQL and experience with RDBMS, Key Value stores.</li>\n<li>Familiarity with cloud platforms and data services.</li>\n<li>Excellent problem-solving skills and the ability to work independently and as part of a team.</li>\n<li>Solid communication skills.</li>\n<li>Familiarity with Azure.</li>\n<li>Experience with machine learning and data science tools and frameworks.</li>\n<li>Knowledge of data visualization tools (e.g., Tableau, Power BI).</li>\n<li>Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor&#39;s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n<li>8+ years of experience in software engineering, with a focus on data engineering and data analytics.</li>\n<li>Solid experience with data processing frameworks such as Apache Spark, Hadoop.</li>\n<li>Expertise in SQL and experience with RDBMS, Key Value stores.</li>\n<li>Familiarity with cloud platforms and data services.</li>\n<li>Excellent problem-solving skills and the ability to work independently and as part of a team.</li>\n<li>Solid communication skills.</li>\n<li>Familiarity with Azure.</li>\n<li>Experience with machine learning and data science tools and frameworks.</li>\n<li>Knowledge of data visualization tools (e.g., Tableau, Power BI).</li>\n<li>Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>\n</ul>\n<p>Salary 
Range:</p>\n<ul>\n<li>The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>\n<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Microsoft is an equal opportunity employer.</li>\n<li>All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances.</li>\n<li>If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.</li>\n</ul>","url":"https://yubhub.co/jobs/job_9afb309b-13e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft
Advertising","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-37/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Spark","Hadoop","SQL","RDBMS","Key Value stores","Cloud platforms","Data services","Machine learning","Data science","Tableau","Power BI","Docker","Kubernetes"],"x-skills-preferred":["Master's Degree in Computer Science or related technical field","8+ years technical engineering experience","Expertise in SQL and experience with RDBMS, Key Value stores","Familiarity with cloud platforms and data services","Excellent problem-solving skills","Solid communication skills","Familiarity with Azure","Experience with machine learning and data science tools and frameworks","Knowledge of data visualization tools (e.g., Tableau, Power BI)","Experience with containerization and orchestration tools (e.g., Docker, Kubernetes)"],"datePosted":"2026-03-08T22:14:59.101Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Spark, Hadoop, SQL, RDBMS, Key Value stores, Cloud platforms, Data services, Machine learning, Data science, Tableau, Power BI, Docker, Kubernetes, Master's Degree in Computer Science or related technical field, 8+ years technical engineering experience, Expertise in SQL and experience with RDBMS, Key Value stores, Familiarity with cloud platforms and data services, Excellent problem-solving skills, Solid communication skills, Familiarity with Azure, Experience with machine learning and data science tools and frameworks, Knowledge of data visualization tools (e.g., Tableau, Power BI), Experience with 
containerization and orchestration tools (e.g., Docker, Kubernetes)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6e6d3e44-db8"},"title":"Data Scientist","description":"<p>We are seeking a highly skilled Data Scientist to join our team. As a Data Scientist, you will play a key role in analysing and interpreting complex data to inform business decisions and drive performance improvement.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Work closely with the data engineering team to design and implement data pipelines and data warehousing solutions</li>\n<li>Develop and maintain data visualisation tools and reports to support business decision-making</li>\n<li>Collaborate with cross-functional teams to identify business opportunities and develop data-driven solutions</li>\n<li>Conduct statistical analysis and machine learning modelling to inform business decisions</li>\n<li>Develop and maintain data quality and governance processes to ensure data accuracy and integrity</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in a quantitative field such as mathematics, statistics, or computer science</li>\n<li>Proven experience in data analysis and machine learning</li>\n<li>Strong programming skills in languages such as Python or R</li>\n<li>Experience with data visualisation tools such as Tableau or Power BI</li>\n<li>Excellent communication and interpersonal skills</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a leading Formula One team</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Professional development and training opportunities</li>\n<li>Access to state-of-the-art technology and tools</li>\n<li>Flexible 
working hours and remote working options</li>\n<li>Annual bonus scheme</li>\n<li>25 days&#39; annual leave</li>\n<li>Pension scheme</li>\n<li>Free on-site parking and meals</li>\n<li>Access to on-site gym and fitness classes</li>\n<li>Discounts on team merchandise and hospitality events</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Master&#39;s degree in a quantitative field</li>\n<li>Experience with cloud-based data platforms such as AWS or GCP</li>\n<li>Experience with big data technologies such as Hadoop or Spark</li>\n<li>Certification in data science or machine learning</li>\n<li>Experience with data governance and quality processes</li>\n</ul>\n<p>If you are a motivated and talented Data Scientist looking for a new challenge, please submit your application. We look forward to hearing from you.</p>","url":"https://yubhub.co/jobs/job_6e6d3e44-db8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Williams Racing","sameAs":"https://careers.williamsf1.com","logo":"https://logos.yubhub.co/careers.williamsf1.com.png"},"x-apply-url":"https://careers.williamsf1.com/job/test-and-validation-senior-test-engineer-in-grove-wantage-jid-395","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["Python","R","Tableau","Power BI","AWS","GCP","Hadoop","Spark","Data visualisation","Machine learning","Data governance","Data quality"],"x-skills-preferred":["Cloud-based data platforms","Big data technologies","Certification in data science or machine learning"],"datePosted":"2026-03-07T20:04:31.518Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Grove,
Oxfordshire"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Motorsport","skills":"Python, R, Tableau, Power BI, AWS, GCP, Hadoop, Spark, Data visualisation, Machine learning, Data governance, Data quality, Cloud-based data platforms, Big data technologies, Certification in data science or machine learning"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c873a489-0dc"},"title":"Data Engineer, Analytics","description":"<p><strong>Data Engineer, Analytics</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Applied AI</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $385K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the team</strong></p>\n<p>The Applied team works across research, engineering, product, and design to bring OpenAI’s technology to consumers and businesses.</p>\n<p>We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. 
Safety is more important to us than unfettered growth.</p>\n<p><strong>About the role</strong></p>\n<p>We&#39;re seeking a Data Engineer to take the lead in building our data pipelines and core tables for OpenAI. These pipelines are crucial for powering analyses, safety systems that guide business decisions, product growth, and prevent bad actors. If you&#39;re passionate about working with data and are eager to create solutions with significant impact, we&#39;d love to hear from you. This role also provides the opportunity to collaborate closely with the researchers behind ChatGPT and help them train new models to deliver to users. As we continue our rapid growth, we value data-driven insights, and your contributions will play a pivotal role in our trajectory. Join us in shaping the future of OpenAI!</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design, build and manage our data pipelines, ensuring all user event data is seamlessly integrated into our data warehouse.</li>\n</ul>\n<ul>\n<li>Develop canonical datasets to track key product metrics including user growth, engagement, and revenue.</li>\n</ul>\n<ul>\n<li>Work collaboratively with various teams, including Infrastructure, Data Science, Product, Marketing, Finance, and Research, to understand their data needs and provide solutions.</li>\n</ul>\n<ul>\n<li>Implement robust and fault-tolerant systems for data ingestion and processing.</li>\n</ul>\n<ul>\n<li>Participate in data architecture and engineering decisions, bringing your strong experience and knowledge to bear.</li>\n</ul>\n<ul>\n<li>Ensure the security, integrity, and compliance of data according to industry and company standards.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have 3+ years of experience as a data engineer and 8+ years of any software engineering experience (including data engineering).</li>\n</ul>\n<ul>\n<li>Proficiency in at least one programming language commonly used within 
Data Engineering, such as Python, Scala, or Java.</li>\n</ul>\n<ul>\n<li>Experience with distributed processing technologies and frameworks, such as Hadoop, Flink and distributed storage systems (e.g., HDFS, S3).</li>\n</ul>\n<ul>\n<li>Expertise with any of ETL schedulers such as Airflow, Dagster, Prefect or similar frameworks.</li>\n</ul>\n<ul>\n<li>Solid understanding of Spark and ability to write, debug and optimize Spark code.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c873a489-0dc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/fc5bbc77-a30c-4e7a-9acc-8a2e748545b4","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230K – $385K • Offers Equity","x-skills-required":["Python","Scala","Java","Hadoop","Flink","HDFS","S3","Airflow","Dagster","Prefect","Spark"],"x-skills-preferred":[],"datePosted":"2026-03-06T18:20:01.101Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Scala, Java, Hadoop, Flink, HDFS, S3, 
Airflow, Dagster, Prefect, Spark","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":385000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7bb50768-9c7"},"title":"Data Scientist","description":"<p><strong>Apply now!</strong></p>\n<p>We are seeking a highly skilled Data Scientist to join our team. As a Data Scientist, you will be responsible for analysing large datasets to gain insights and improve our racing performance.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Collect and process large datasets from various sources, including telemetry data, weather forecasts, and track conditions</li>\n<li>Develop and implement machine learning models to predict racing outcomes and identify areas for improvement</li>\n<li>Collaborate with our engineering team to integrate data-driven insights into our racing strategy</li>\n<li>Develop and maintain data visualisation tools to communicate insights to the team</li>\n<li>Stay up-to-date with the latest developments in data science and machine learning</li>\n<li>Work closely with our data engineer to ensure data quality and integrity</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>PhD in Computer Science, Mathematics, or a related field</li>\n<li>Strong programming skills in Python, R, or MATLAB</li>\n<li>Experience with machine learning libraries such as scikit-learn, TensorFlow, or PyTorch</li>\n<li>Strong understanding of statistical concepts and data visualisation techniques</li>\n<li>Excellent communication and collaboration skills</li>\n<li>Ability to work in a fast-paced environment and meet deadlines</li>\n</ul>\n<p><strong>Preferred Skills:</strong></p>\n<ul>\n<li>Experience with big data technologies such as Hadoop, Spark, or NoSQL databases</li>\n<li>Knowledge of SQL and database design</li>\n<li>Familiarity with cloud-based data 
platforms such as AWS or Google Cloud</li>\n<li>Experience with data visualisation tools such as Tableau, Power BI, or D3.js</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a professional motorsport team</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Access to state-of-the-art technology and equipment</li>\n<li>Professional development opportunities</li>\n</ul>\n<p><strong>How to Apply:</strong></p>\n<p>If you are a motivated and talented Data Scientist looking for a new challenge, please submit your application, including your CV and a cover letter, to [insert contact email]. We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7bb50768-9c7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"W Racing Team","sameAs":"https://www.w-racingteam.com","logo":"https://logos.yubhub.co/w-racingteam.com.png"},"x-apply-url":"https://www.w-racingteam.com/manufacturing/careers/stage","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["Python","R","MATLAB","scikit-learn","TensorFlow","PyTorch","SQL","database design"],"x-skills-preferred":["Hadoop","Spark","NoSQL databases","Tableau","Power BI","D3.js"],"datePosted":"2026-03-06T14:27:32.187Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"empty"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Motorsport","skills":"Python, R, MATLAB, scikit-learn, TensorFlow, PyTorch, SQL, database design, Hadoop, Spark, NoSQL databases, Tableau, Power BI, 
D3.js"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f5e6e615-f4f"},"title":"Principal Applied Scientist","description":"<p><strong>Summary</strong></p>\n<p>Microsoft are looking for a talented Principal Applied Scientist at their Bengaluru office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising the field of artificial intelligence. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI and research organization.</p>\n<p><strong>About the Role</strong></p>\n<p>We are a team of applied scientists working on machine learning components across the whole sponsored search stack. Our team works on problems related to machine learning, deep learning, natural language processing, multi-arm bandits, optimization, information retrieval, and auction theory, among others. Our work entails building large-scale machine learning systems for ad matching, filtration, ranking, and multi-objective optimization, as well as several other ML-driven business problems.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Conduct in-depth market research across AI and research sectors, identifying emerging trends, competitive threats, and partnership opportunities that directly inform the company&#39;s quarterly strategic planning sessions</li>\n<li>Design, implement, analyze, and tune complex algorithms and ML systems and the supporting infrastructure for operating on large datasets</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>MS/BS in CS/EE, mathematical or machine learning related disciplines, with 10 or more years of experience</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Solid understanding of probability, statistics, machine learning, data science</li>\n<li>A/B testing &amp; analysis of ML models, 
and optimizing models for accuracy</li>\n<li>Experience with Hadoop, Spark, or other distributed computing systems for large-scale training &amp; prediction with ML models</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>End-to-end system design: data analysis, feature engineering, technique selection &amp; implementation, debugging, and maintenance in production</li>\n<li>Experience implementing machine learning algorithms or research papers from scratch</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunities for professional growth and development</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Access to cutting-edge technology and resources</li>\n<li>Flexible work arrangements and work-life balance</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f5e6e615-f4f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-applied-scientist-2/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["machine learning","deep learning","natural language processing","multi-arm bandit","optimization","information retrieval","auction theory"],"x-skills-preferred":["TensorFlow","PyTorch","Hadoop","Spark"],"datePosted":"2026-03-06T07:33:46.212Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, deep learning, natural language processing, multi-arm bandit, optimization, information retrieval, auction theory, TensorFlow, 
PyTorch, Hadoop, Spark"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_26c57034-3a3"},"title":"Senior Software Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Senior Software Engineer at their Redmond office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising haptic entertainment technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the cinema and simulation markets.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Senior Software Engineer, you&#39;ll design and operate high-scale, high-performance systems that process billions of events through near-real-time and offline pipelines. You&#39;ll build data applications that directly impact Microsoft Ads&#39; double-digit annual growth. You&#39;ll work on cutting-edge technologies in distributed systems, machine learning, and big data.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Conduct in-depth market research across cinema and simulation sectors, identifying emerging trends, competitive threats, and partnership opportunities that directly inform the company&#39;s quarterly strategic planning sessions</li>\n<li>Work with BingAds stakeholders to determine requirements for new features to drive up Ads business</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience in Azure</li>\n<li>Experience in Machine learning and online system design, implementation and qualification</li>\n<li>2+ years’ experience in Distributed Systems and Big Data Technologies such as Spark, Hadoop, 
HDFS, Kafka, Flink, Scala</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong problem-solving skills</li>\n<li>Excellent communication and collaboration skills</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary</li>\n<li>Comprehensive benefits package</li>\n<li>Opportunities for professional growth and development</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_26c57034-3a3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-79/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $119,800 – $234,700 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Azure","Machine learning","Distributed Systems","Big Data Technologies"],"x-skills-preferred":["Spark","Hadoop","HDFS","Kafka","Flink","Scala"],"datePosted":"2026-03-06T07:33:27.032Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Azure, Machine learning, Distributed Systems, Big Data Technologies, Spark, Hadoop, HDFS, Kafka, Flink, Scala","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a14e2c8b-37a"},"title":"Member of Technical Staff - Data Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff - Data 
Engineer at their New York office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff - Data Engineer, you will be responsible for building scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases. You will work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven, product development cycle. You will embody our Culture and Values.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>\n<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>6+ years&#39; experience in business analytics, data science, software development, data modeling or data engineering work.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>\n<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Ability 
to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>\n<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary.</li>\n<li>Comprehensive benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a14e2c8b-37a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineer/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["data engineering","data science","software development","data modeling","Apache Hadoop","Kafka","NoSQL"],"x-skills-preferred":["Python","Java","Spark","SQL"],"datePosted":"2026-03-06T07:28:47.722Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, software development, data modeling, Apache Hadoop, Kafka, NoSQL, Python, Java, Spark, SQL","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e7e8ccb3-342"},"title":"Member of Technical Staff - Data Engineering Manager - Microsoft AI - 
Copilot","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot at their Mountain View office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot, you will be responsible for building scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases. You will work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven, product development cycle. 
You will embody our Culture and Values.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>\n<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>6+ years&#39; experience in business analytics, data science, software development, data modeling or data engineering work.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>\n<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>\n<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering M5 – The typical base pay range for this role across the U.S. 
is USD $139,900 – $274,800 per year.</li>\n<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e7e8ccb3-342","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineering-manager-microsoft-ai-copilot/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["data engineering","data science","software development","data modeling","Apache Hadoop","Kafka","NoSQL"],"x-skills-preferred":["Python","Java","Spark","SQL"],"datePosted":"2026-03-06T07:28:45.799Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, software development, data modeling, Apache Hadoop, Kafka, NoSQL, Python, Java, Spark, SQL","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_90fb1727-c33"},"title":"Senior Software Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft are looking for a talented Senior Software Engineer at their Mountain View office. 
This role sits at the heart of driving our data strategy, ensuring the integrity and accessibility of our data and leveraging data insights to support business decisions.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Senior Software Engineer, you will play a key role in designing and implementing scalable data solutions. You will collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions. You will develop and optimize data models to support data analytics, utilize advanced analytics techniques to extract insights from large datasets, and drive data-driven decision making.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.</li>\n<li>Develop and optimize data models to support data analytics.</li>\n<li>Utilize advanced analytics techniques to extract insights from large datasets and drive data-driven decision making.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>4+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Solid experience with data processing frameworks such as Apache Spark, Hadoop.</li>\n<li>Expertise in SQL and experience with RDBMS, Key Value stores.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Excellent problem-solving skills and the ability to work independently and as part of a team.</li>\n<li>Solid communication skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>The typical base pay range for this role across the U.S. 
is USD $119,800 – $234,700 per year.</li>\n<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $158,400 – $258,000 per year.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_90fb1727-c33","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-68/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $119,800 – $234,700 per year","x-skills-required":["data engineering","data analytics","software development","data modeling"],"x-skills-preferred":["Apache Spark","Hadoop","SQL","RDBMS","Key Value stores"],"datePosted":"2026-03-06T07:28:37.496Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data analytics, software development, data modeling, Apache Spark, Hadoop, SQL, RDBMS, Key Value stores","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5e7ed194-bd7"},"title":"Member of Technical Staff - Data Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Engineer at their Mountain View office. 
This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff - Data Engineer, you will be responsible for building scalable data pipelines for sourcing, transforming, and publishing data assets for AI use cases. You will work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven, product development cycle. You will embody our Culture and Values.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Build scalable data pipelines for sourcing, transforming, and publishing data assets for AI use cases.</li>\n<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>6+ years&#39; experience in business analytics, data science, software development, data modeling or data engineering work.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>\n<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Ability to identify, analyze, and resolve 
complex technical issues, ensuring optimal performance, scalability, and user experience.</li>\n<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary.</li>\n<li>Comprehensive benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5e7ed194-bd7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineer-2/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["data engineering","data science","software development","data modeling","Apache Hadoop","Kafka","NoSQL"],"x-skills-preferred":["Python","Java","Spark","SQL"],"datePosted":"2026-03-06T07:28:32.962Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, software development, data modeling, Apache Hadoop, Kafka, NoSQL, Python, Java, Spark, SQL","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c06df3c0-c8c"},"title":"Senior Software Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft are looking for a 
talented Senior Software Engineer at their New York office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising digital advertising technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the advertising market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Senior Software Engineer, you will play a key role in driving our data strategy, ensuring the integrity and accessibility of our data and leveraging data insights to support business decisions. You will collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions. You will develop and optimize data models to support data analytics and utilize advanced analytics techniques to extract insights from large datasets and drive data-driven decision-making.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.</li>\n<li>Develop and optimize data models to support data analytics.</li>\n<li>Utilize advanced analytics techniques to extract insights from large datasets and drive data-driven decision-making.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>4+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Solid experience with data processing frameworks such as Apache Spark, Hadoop.</li>\n<li>Expertise in SQL and experience with RDBMS, Key Value stores.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Excellent problem-solving skills and the ability to work independently and as part of a team.</li>\n<li>Solid communication 
skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary range of $119,800 - $234,700 per year.</li>\n<li>Comprehensive benefits package, including health insurance, retirement plan, and paid time off.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c06df3c0-c8c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-67/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$119,800 - $234,700 per year","x-skills-required":["data engineering","data analytics","SQL","RDBMS","Key Value stores"],"x-skills-preferred":["Apache Spark","Hadoop","machine learning","data science"],"datePosted":"2026-03-06T07:28:22.315Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data analytics, SQL, RDBMS, Key Value stores, Apache Spark, Hadoop, machine learning, data science","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e3ce7035-a47"},"title":"Software Engineer II","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Software Engineer II to join their Ads Data Platform Team. 
This role is available in Redmond, WA and is a great opportunity for those who are passionate about solving complex problems and driving innovation.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Software Engineer II, you will design and operate high-scale, high-performance systems that process billions of events through near-real-time and offline pipelines. You will build data applications that directly impact Microsoft Ads&#39; double-digit annual growth. You will work on cutting-edge technologies in distributed systems, machine learning, and big data.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Design and operate high-scale, high-performance systems that process billions of events through near-real-time and offline pipelines.</li>\n<li>Build data applications that directly impact Microsoft Ads&#39; double-digit annual growth.</li>\n<li>Work on cutting-edge technologies in distributed systems, machine learning, and big data.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>2+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience in Azure.</li>\n<li>Experience in machine learning and online system design, implementation and qualification.</li>\n<li>2+ years&#39; experience in Distributed Systems and Big Data Technologies such as Spark, Hadoop, HDFS, Kafka, Flink, Scala.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong problem-solving skills and ability to work in a fast-paced environment.</li>\n<li>Excellent communication and collaboration skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary range of $100,600 - $199,000 per year.</li>\n<li>Comprehensive benefits package including health, dental, and vision insurance.</li>\n<li>401(k) matching 
program.</li>\n<li>Paid time off and holidays.</li>\n<li>Opportunities for professional growth and development.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e3ce7035-a47","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/software-engineer-ii-7/","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$100,600 - $199,000 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Azure","machine learning","Distributed Systems","Big Data Technologies"],"x-skills-preferred":["Spark","Hadoop","HDFS","Kafka","Flink","Scala"],"datePosted":"2026-03-06T07:28:17.990Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Azure, machine learning, Distributed Systems, Big Data Technologies, Spark, Hadoop, HDFS, Kafka, Flink, Scala","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100600,"maxValue":199000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_20f855c9-333"},"title":"Senior Software Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft are looking for a talented Senior Software Engineer at their Redmond office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising digital advertising technology. 
You&#39;ll work directly with leadership to shape the company&#39;s direction in the advertising market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Senior Software Engineer, you will play a key role in driving our data strategy, ensuring the integrity and accessibility of our data and leveraging data insights to support business decisions. You will collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions. You will develop and optimize data models to support data analytics, utilize advanced analytics techniques to extract insights from large datasets, and drive data-driven decision-making.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.</li>\n<li>Develop and optimize data models to support data analytics.</li>\n<li>Utilize advanced analytics techniques to extract insights from large datasets and drive data-driven decision-making.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>4+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Solid experience with data processing frameworks such as Apache Spark, Hadoop.</li>\n<li>Expertise in SQL and experience with RDBMS, Key Value stores.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Excellent problem-solving skills and the ability to work independently and as part of a team.</li>\n<li>Solid communication skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary range of USD $119,800 – $234,700 per year.</li>\n<li>Comprehensive benefits package, including health insurance, retirement plan, and paid time off.</li>\n<li>Opportunities for professional growth and 
development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_20f855c9-333","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-66/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $119,800 – $234,700 per year","x-skills-required":["data engineering","data analytics","SQL","RDBMS","Key Value stores","Apache Spark","Hadoop"],"x-skills-preferred":["machine learning","data science","containerization","orchestration"],"datePosted":"2026-03-06T07:28:08.420Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data analytics, SQL, RDBMS, Key Value stores, Apache Spark, Hadoop, machine learning, data science, containerization, orchestration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_71dd03c1-3da"},"title":"Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot at their New York office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. 
You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot, you will be responsible for building scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases. You will work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven, product development cycle. You will embody our Culture and Values.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>\n<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>6+ years&#39; experience in business analytics, data science, software development, data modeling or data engineering work.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>4+ years&#39; technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>\n<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>\n<li>Dedication to writing 
clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering M5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year.</li>\n<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 - $304,200 per year.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_71dd03c1-3da","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineering-manager-microsoft-ai-copilot-2/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 - $274,800 per year","x-skills-required":["data engineering","data science","software development","data modeling","Apache Hadoop","Kafka","NoSQL"],"x-skills-preferred":["Python","Java","Spark","SQL"],"datePosted":"2026-03-06T07:28:00.562Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, software development, data modeling, Apache Hadoop, Kafka, NoSQL, Python, Java, Spark, SQL","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0b85acc3-a49"},"title":"Member of Technical 
Staff - Data Engineering Manager - Microsoft AI - Copilot","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot at their Redmond office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot, you will be responsible for building scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases. You will work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven, product development cycle. 
You will embody our Culture and Values.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>\n<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>6+ years&#39; experience in business analytics, data science, software development, data modeling or data engineering work.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>4+ years&#39; technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>\n<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>\n<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering M5 - The typical base pay range for this role across the U.S. 
is USD $139,900 - $274,800 per year.</li>\n<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 - $304,200 per year.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0b85acc3-a49","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineering-manager-microsoft-ai-copilot-3/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 - $274,800 per year","x-skills-required":["data engineering","data science","software development","data modeling","Apache Hadoop","Kafka","NoSQL"],"x-skills-preferred":["Python","Java","Spark","SQL"],"datePosted":"2026-03-06T07:27:42.814Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, software development, data modeling, Apache Hadoop, Kafka, NoSQL, Python, Java, Spark, SQL","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_219eb7e5-619"},"title":"Member of Technical Staff - Data Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Engineer at their Redmond office. 
This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff - Data Engineer, you will be responsible for building scalable data pipelines for sourcing, transforming, and publishing data assets for AI use cases. You will work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven, product development cycle. You will embody our Culture and Values.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Build scalable data pipelines for sourcing, transforming, and publishing data assets for AI use cases.</li>\n<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>6+ years&#39; experience in business analytics, data science, software development, data modeling or data engineering work.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>4+ years&#39; technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>\n<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Ability to identify, analyze, and resolve 
complex technical issues, ensuring optimal performance, scalability, and user experience.</li>\n<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary.</li>\n<li>Comprehensive benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_219eb7e5-619","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineer-3/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["data engineering","data science","software development","data modeling","Apache Hadoop","Kafka","NoSQL"],"x-skills-preferred":["Python","Java","Spark","SQL"],"datePosted":"2026-03-06T07:26:42.731Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, software development, data modeling, Apache Hadoop, Kafka, NoSQL, Python, Java, Spark, SQL","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_901a6402-db5"},"title":"Data Engineer","description":"<p>Join Razer to help build and optimize data pipelines and data platforms that 
support analytics, product improvements, and foundational AI/ML data needs. Collaborate with cross-functional teams to ensure data is reliable, accessible, and governed. Tech stack includes Redshift, Airflow, and DBT.</p>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Strong Python and SQL</li>\n<li>Hands-on experience with Redshift, Airflow, DBT</li>\n<li>Mandatory hands-on experience with Apache Spark (batch and/or structured processing)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_901a6402-db5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Razer","sameAs":"https://razer.wd3.myworkdayjobs.com","logo":"https://logos.yubhub.co/razer.com.png"},"x-apply-url":"https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Chengdu/Data-Engineer_JR2025006594","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Redshift","Airflow","DBT","Apache Spark"],"x-skills-preferred":["Apache Flink","Apache Kafka","Hadoop ecosystem components","ETL design patterns","performance tuning"],"datePosted":"2025-12-26T10:57:30.602Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chengdu"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Redshift, Airflow, DBT, Apache Spark, Apache Flink, Apache Kafka, Hadoop ecosystem components, ETL design patterns, performance tuning"}]}