{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/etl-processes"},"x-facet":{"type":"skill","slug":"etl-processes","display":"ETL Processes","count":19},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cc2fb376-a15"},"title":"Data Scientist (Generative AI)","description":"<p>Are you passionate about innovative technologies and Generative AI? Do you want to lay the groundwork for new AI solutions and develop prototypes and production models, rather than just analyzing data? 
Then join our team and shape the future of AI-supported products and data-driven solutions together with us.</p>\n<p>Your tasks will include:</p>\n<ul>\n<li>Data acquisition, cleaning, and feature engineering: extracting, transforming, and cleaning data from various sources, and developing new features for ML models, GenAI models, and agent systems</li>\n<li>Exploratory data analysis and modeling: conducting analyses, developing and evaluating ML/GenAI prototypes, concepts for self-learning systems, and human-in-the-loop approaches</li>\n<li>Prototyping and integration: creating prototypes in Python, integrating models into systems or cloud environments, and implementing AI solutions based on LLMs</li>\n<li>Identification of use cases: analyzing business processes, recognizing opportunities for GenAI, deriving technical solutions, and integrating them into existing system and process landscapes</li>\n<li>Project and stakeholder management: moderating workshops and coordinating closely with interdisciplinary teams, international project partners, and customers</li>\n</ul>\n<p>To be well prepared for this role, you should have the following qualifications:</p>\n<ul>\n<li>A completed degree in computer science, data science, mathematics, statistics, or a comparable field, with at least 4 years of professional experience - ideally in consulting or in data science, ML, or AI projects</li>\n<li>Passion for data, AI, and Generative AI, as well as enthusiasm for their strategic value to the business</li>\n<li>Expertise in Python, SQL, ETL processes, RAG, ML/DL, LLMs, Pandas, and NumPy</li>\n<li>A working style characterized by personal responsibility, goal orientation, teamwork, and a hands-on mentality</li>\n</ul>\n<p>Good to know before you start:</p>\n<ul>\n<li>Start: by agreement - always at the beginning of a month</li>\n<li>Working hours: full-time (40 hours) and/or part-time possible; 30 vacation days</li>\n<li>Employment relationship: 
permanent</li>\n<li>Field: consulting</li>\n<li>Language: fluent German and English</li>\n<li>Flexibility and willingness to travel</li>\n<li>Other: valid work permit; if necessary, we can apply for the work permit as part of our recruiting process. The procedure takes time and affects the start date</li>\n</ul>\n<p>At MHP, you grow continuously in an innovative and supportive environment. This makes us the perfect sparring partner for your career, both for professional input and for networking. We offer you:</p>\n<ul>\n<li>Appreciation. We support and appreciate colleagues as they are and celebrate our successes together</li>\n<li>We always welcome creativity and fresh ideas</li>\n<li>Flexibility. In both time and place - depending on the project, at home, in the office, or on site with the customer</li>\n<li>The opportunity to grow with us in tasks, knowledge, and responsibility</li>\n</ul>\n<p>Applying is quick and simple: online via our Job Locator, where you can send us your application documents - such as your resume, certificates, and, if applicable, project lists - in just a few clicks. A cover letter is not required.</p>\n<p>By the way: once your application reaches us, our recruiting team checks across departments whether there is a suitable position for you. 
Regardless of current job postings, we try to find the right job for you at MHP.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cc2fb376-a15","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"https://www.mhp.com","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18796","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Competitive salary","x-skills-required":["Python","SQL","ETL processes","RAG","ML/DL","LLMs","Pandas","NumPy"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:26:40.834Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, ETL processes, RAG, ML/DL, LLMs, Pandas, NumPy"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7275ef33-009"},"title":"Staff Data Engineer","description":"<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows to connect operational systems, data for analytics and business intelligence (BI) systems. 
You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize code so that processes perform optimally, and lead work on database management.</p>\n<p>Communicating Between Technical and Non-Technical Colleagues</p>\n<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>\n<p>Data Analysis and Synthesis</p>\n<p>You will undertake data profiling and source system analysis, and present clear insights to colleagues to support the end use of the data.</p>\n<p>Data Development Process</p>\n<p>You will design, build, and test data products that are complex or large scale, and build teams to complete data integration services.</p>\n<p>Data Innovation</p>\n<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques, and data usage.</p>\n<p>Data Integration Design</p>\n<p>You will select and implement the appropriate technologies to deliver resilient, scalable, and future-proofed data solutions and integration pipelines.</p>\n<p>Data Modeling</p>\n<p>You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognized data modeling patterns and standards and when to apply them, and compare and align different data models.</p>\n<p>Metadata Management</p>\n<p>You will design an appropriate metadata repository and present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, and provide oversight and advice to less experienced members of the team.</p>\n<p>Problem Resolution</p>\n<p>You will respond to problems in databases, data processes, data products, and services as they occur, initiate actions, monitor services and identify trends to resolve problems, determine the appropriate remedy and 
assist with its implementation, and with preventative measures.</p>\n<p>Programming and Build</p>\n<p>You will use agreed standards and tools to design, code, test, correct and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, collaborate with others to review specifications where appropriate.</p>\n<p>Technical Understanding</p>\n<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>\n<p>Testing</p>\n<p>You will review requirements and specifications, and define test conditions, identify issues and risks associated with work, analyse and report test activities and results.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7275ef33-009","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976928777","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$114,400 to $171,600","x-skills-required":["Proficiency in programming language such as Python or Java","Experience with Big Data technologies such as Hadoop, Spark, and Kafka","Familiarity with ETL processes and tools","Knowledge of SQL and NoSQL databases","Strong understanding of relational databases","Experience with data warehousing solutions","Proficiency with cloud platforms","Expertise in data modeling and design","Experience in designing and building scalable data pipelines","Experience with RESTful APIs and data integration"],"x-skills-preferred":["Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified)","Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field","Strong analytical and communication 
skills","Ability to work collaboratively in a team environment","High level of accuracy and attention to detail"],"datePosted":"2026-04-18T22:12:56.654Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to detail","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":171600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3fa0b80f-842"},"title":"Staff Software Engineer, Public Sector","description":"<p>Job Title: Staff Software Engineer, Public Sector</p>\n<p>We are seeking a highly skilled Staff Software Engineer to join our Public Sector team. As a Staff Software Engineer, you will be responsible for designing and implementing software solutions for the public sector. 
You will work closely with cross-functional teams to develop and deploy software applications that meet the needs of government agencies.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and implement software solutions for the public sector</li>\n<li>Work closely with cross-functional teams to develop and deploy software applications</li>\n<li>Collaborate with stakeholders to understand their needs and develop software solutions that meet those needs</li>\n<li>Develop and maintain software documentation</li>\n<li>Participate in code reviews and ensure that code meets quality standards</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or related field</li>\n<li>5+ years of experience in software development</li>\n<li>Proficiency in programming languages such as Java, Python, or C++</li>\n<li>Experience with Agile development methodologies</li>\n<li>Strong understanding of software design patterns and principles</li>\n<li>Excellent communication and collaboration skills</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s degree in Computer Science or related field</li>\n<li>10+ years of experience in software development</li>\n<li>Experience with cloud-based technologies such as AWS or Azure</li>\n<li>Experience with DevOps practices</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunities for professional growth and development</li>\n<li>Collaborative and dynamic work environment</li>\n</ul>\n<p>Salary Range: $252,000-$362,000 USD</p>\n<p>Required Skills:</p>\n<ul>\n<li>Full Stack Development</li>\n<li>Cloud-Native Technologies</li>\n<li>Data Engineering</li>\n<li>AI Application Integration</li>\n<li>Problem Solving</li>\n<li>Collaboration and Communication</li>\n<li>Adaptability and Learning Agility</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with modern web development frameworks</li>\n<li>Familiarity with cloud platforms</li>\n<li>Understanding 
of containerization and container orchestration</li>\n<li>Knowledge of ETL processes</li>\n<li>Understanding of data modeling, data warehousing, and data governance principles</li>\n<li>Familiarity with integrating Large Language Models</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3fa0b80f-842","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4674913005","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$252,000-$362,000 USD","x-skills-required":["Full Stack Development","Cloud-Native Technologies","Data Engineering","AI Application Integration","Problem Solving","Collaboration and Communication","Adaptability and Learning Agility"],"x-skills-preferred":["Experience with modern web development frameworks","Familiarity with cloud platforms","Understanding of containerization and container orchestration","Knowledge of ETL processes","Understanding of data modeling, data warehousing, and data governance principles","Familiarity with integrating Large Language Models"],"datePosted":"2026-04-18T16:00:27.694Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; St. 
Louis, MO; New York, NY; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Experience with modern web development frameworks, Familiarity with cloud platforms, Understanding of containerization and container orchestration, Knowledge of ETL processes, Understanding of data modeling, data warehousing, and data governance principles, Familiarity with integrating Large Language Models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":252000,"maxValue":362000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_41a793fc-9ff"},"title":"Staff Data Analyst (Bengaluru)","description":"<p>We are looking for an experienced data analyst to join Okta&#39;s enterprise data team. The successful candidate will have a strong background in financial and business performance analytics, and a proven track record of proactively identifying and solving complex business problems through data.</p>\n<p>In this role, you will be focusing on Finance data and reporting, partnering with Finance, Accounting, Sales Operations, and Executive Leadership to implement enhancements and build end-to-end financial insights across the organization.</p>\n<p>Responsibilities: Proactively partner with Finance and Accounting leadership to set the analytics roadmap and identify high-impact opportunities for data to drive business value. Serve as a trusted advisor to senior Finance and business stakeholders, influencing their strategy and decision-making through data-driven narratives. Translate ambiguous business questions into clear, structured analytical requirements and measurable project plans. 
Partner with Finance and Operations teams to provide best practices in financial metric definition, dashboard design, and modeling. Conduct deep-dive, root-cause analyses on performance variances, translating complex data into clear, strategic recommendations. Design and build scalable data models to support enterprise-wide financial reporting. Own the entire lifecycle of financial data products, from initial concept to driving adoption and measuring business impact. Enable self-service data consumption for business users. Develop and champion new analytical methods and tools to continuously improve financial reporting and decision-making processes. Work with Data Engineering to define, implement, and build new data sources and transformations.</p>\n<p>Requirements: 8+ years&#39; experience as a Data Analyst. 6+ years&#39; hands-on SQL experience in a work environment. Expertise in developing and maintaining complex financial models, including scenario planning, predictive forecasting, and analysis. Experience building large-scale data models (e.g., using dbt or Airflow), including proven experience in modeling intricate financial metrics. Experience with data management, documenting processes and data flows, and ensuring data quality. Familiarity with data quality frameworks and monitoring tools. Experience with building AI-ready data and semantic layers. Experience with building reports and visualizations to represent data intuitively in Tableau or similar data visualization tools. Exceptional communication, presentation, and storytelling skills, with the ability to convey complex analytical findings to executive audiences. Demonstrated ability to operate independently and execute projects with minimal supervision. Experience with ETL processes, software development, and lifecycle awareness. 
Familiarity with data governance and report/data catalog applications (Collibra, Alation, Data.world).</p>\n<p>The Okta Experience: Supporting Your Well-Being. Driving Social Impact. Developing Talent and Fostering Connection + Community.</p>\n<p>Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_41a793fc-9ff","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7645984","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","data management","financial modeling","data quality frameworks","data visualization","ETL processes","software development","data governance"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:32.297Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Technology","skills":"SQL, data management, financial modeling, data quality frameworks, data visualization, ETL processes, software development, data governance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cc8eb5bc-349"},"title":"Staff Data Analyst","description":"<p>As a Staff Data Analyst at Okta, you will play a key 
role in creating the foundation for data-based decision-making across the company&#39;s functional business teams. You will work with internal customers to identify ways to effectively leverage data using cutting-edge cloud and big data technologies to drive business insights.</p>\n<p>Your responsibilities will include serving as the definitive SME for Customer First analytics, defining the data models, metrics, and value drivers that steer company strategy. You will also lead the exploration and integration of AI-driven tools to automate workflows and pioneer new methodologies for data discovery.</p>\n<p>Additionally, you will architect the analytics strategy for customer insights, leveraging product telemetry and public data to identify and predict risk and growth signals. You will own the vision for our semantic layer, ensuring it supports advanced modeling, self-service, and high-integrity dashboarding.</p>\n<p>You will participate in high-impact initiatives in predictive modeling (cross-sell, churn, LTV) that directly influence GTM execution. You will partner with leadership and account teams to translate raw insights into a high-impact, action-oriented Command Center, empowering account teams to instantly prioritize and execute on the most urgent opportunities and risks.</p>\n<p>You will also partner with Engineering to co-design data pipelines and transformations that ensure long-term scalability and data quality. You will set the bar for excellence in data storytelling and modeling, mentoring the broader team on best practices and process improvement.</p>\n<p>To succeed in this role, you will need to have a passion for driving decisions and insights through data. You will be detail-oriented, analytical, and able to solve big problems. You will also need to be able to effectively communicate with team members and business partners.</p>\n<p>In terms of qualifications, you will need to have a BS in CS, MIS, or a related technical degree. 
You will also need to have 7+ years of experience as a Data Analyst/Data Engineer/BI Developer. Advanced SQL experience is also required.</p>\n<p>Preferred qualifications include experience with building reports and visualizations to represent data intuitively in Tableau or similar data visualization tools. Advanced analytics, data science, AI/ML experience and techniques are also a plus.</p>\n<p>Finally, you will need to be able to work cross-functionally and communicate with technical and non-technical teams. Experience with ETL processes, software development, and lifecycle awareness, using Python, Java, Databricks, and Snowflake is also a plus.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cc8eb5bc-349","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7792010","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Data Analysis","Data Visualization","Tableau","Python","Java","Databricks","Snowflake"],"x-skills-preferred":["Advanced Analytics","Data Science","AI/ML","ETL Processes","Software Development","Lifecycle Awareness"],"datePosted":"2026-04-18T15:48:00.277Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Data Analysis, Data Visualization, Tableau, Python, Java, Databricks, Snowflake, Advanced Analytics, Data Science, AI/ML, ETL Processes, Software Development, Lifecycle 
Awareness"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e00b7052-70b"},"title":"Senior Business Systems Analyst, Finance Systems","description":"<p>We are seeking an experienced Senior Business Systems Analyst to join our Finance Systems team at Anthropic. In this role, you will serve as the internal functional lead for our Workday Financials implementation, owning the design and configuration of the Financial Data Model (FDM), Chart of Accounts, and dimensional structures that will serve as the source of truth for financial reporting.</p>\n<p>You will develop Prism Analytics and Accounting Center solutions, gather requirements and build reporting capabilities, and collaborate closely with cross-functional teams to drive the successful adoption of our new ERP platform.</p>\n<p>This is a critical role that will directly shape how Anthropic&#39;s finance organisation operates as we scale toward public company readiness. 
You will work at the intersection of finance domain expertise and technical implementation, partnering with the implementation partner, engineering teams, and finance stakeholders to build a world-class financial systems foundation.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>ERP Core Financials Implementation: Serve as internal functional lead for Workday Financials implementation, partnering with consultants to drive configuration decisions, validate designs, and ensure business requirements are met</li>\n</ul>\n<ul>\n<li>Financial Data Model (FDM) Design: Own the design and configuration of Chart of Accounts, Worktags, dimensional hierarchies, and Accounting Books that will serve as the source of truth for all financial reporting, ensuring support for both GAAP and Management reporting requirements</li>\n</ul>\n<ul>\n<li>Prism Analytics Development: Develop and maintain Prism/Accounting Center solutions from source analysis and ingestion design through build, testing, cutover, and hypercare, including integration with external data sources like BigQuery and Pigment</li>\n</ul>\n<ul>\n<li>Requirements Gathering &amp; Reporting: Gather business requirements from Finance, Accounting, and FP&amp;A stakeholders, translating them into hands-on development of executive reporting, dashboards, and analytics solutions</li>\n</ul>\n<ul>\n<li>Workshop Participation &amp; Solution Design: Participate in implementation workshops, challenge requirements, and translate business needs into buildable designs and testable acceptance criteria; manage defects and data quality issues throughout the project lifecycle</li>\n</ul>\n<ul>\n<li>Cross-Functional Collaboration: Collaborate with Integrations, Security, and Financials configuration teams to align master data, journals, controls, and performance service level agreements; partner with Data Infrastructure and BizTech teams on system integrations</li>\n</ul>\n<ul>\n<li>Cutover &amp; Hypercare Planning: Prepare cutover plans, data 
migration strategies, reconciliation frameworks, and hypercare plans; document data lineage, controls, and audit artifacts to support SOX compliance requirements</li>\n</ul>\n<ul>\n<li>Platform Expansion &amp; Adoption: Work closely with engineering teams and business stakeholders to drive ongoing expansion and adoption of the Workday platform, identifying opportunities for process improvement and automation</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 8+ years of experience in finance systems, ERP implementation, or business systems analysis roles, with at least 5 years of hands-on Workday Financials experience</li>\n</ul>\n<ul>\n<li>Possess deep expertise in Workday Financial Data Model (FDM), including Chart of Accounts design, Worktags configuration, dimensional hierarchies, and Accounting Books setup</li>\n</ul>\n<ul>\n<li>Have strong experience with Workday Prism Analytics, including data modeling, source integration, calculated fields, and report development</li>\n</ul>\n<ul>\n<li>Are skilled at translating complex business requirements into technical solutions, bridging the gap between finance stakeholders and technical implementation teams</li>\n</ul>\n<ul>\n<li>Have experience with full ERP implementation lifecycles, including requirements gathering, configuration, testing, data migration, cutover planning, and hypercare</li>\n</ul>\n<ul>\n<li>Possess strong understanding of financial accounting processes including General Ledger, multi-entity consolidation, intercompany accounting, and management reporting</li>\n</ul>\n<ul>\n<li>Have excellent stakeholder management and communication skills, with ability to work effectively with finance leadership, accounting teams, and technical partners</li>\n</ul>\n<ul>\n<li>Demonstrate strong analytical and problem-solving skills with attention to detail and commitment to data accuracy and integrity</li>\n</ul>\n<ul>\n<li>Are comfortable working in fast-paced, high-growth environments with 
evolving requirements and tight timelines</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Background in accounting, finance, or CPA certification with understanding of GAAP/IFRS reporting requirements</li>\n</ul>\n<ul>\n<li>Experience with Workday Accounting Center for complex journal automation and subledger accounting</li>\n</ul>\n<ul>\n<li>Technical proficiency with SQL, Python, or scripting languages for data analysis and integration support</li>\n</ul>\n<ul>\n<li>Experience integrating Workday with external data platforms such as BigQuery or cloud data warehouses</li>\n</ul>\n<ul>\n<li>Knowledge of SOX compliance requirements and internal controls for financial systems</li>\n</ul>\n<ul>\n<li>Experience with EPM/FP&amp;A systems such as Pigment, Anaplan, or Adaptive Planning and their integration with ERP</li>\n</ul>\n<ul>\n<li>Prior experience at high-growth technology companies scaling toward IPO readiness</li>\n</ul>\n<ul>\n<li>Familiarity with Workday HCM and understanding of HCM-Financials integration points</li>\n</ul>\n<ul>\n<li>Experience with data migration tools, ETL processes, and reconciliation frameworks for ERP implementations</li>\n</ul>\n<p>The annual compensation range for this role is $205,000-$265,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e00b7052-70b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.co/","logo":"https://logos.yubhub.co/anthropic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4991194008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,000-$265,000 USD","x-skills-required":["Workday Financials","Workday Financial Data Model (FDM)","Chart of Accounts design","Worktags configuration","Dimensional hierarchies","Accounting Books 
setup","Prism Analytics","Data modeling","Source integration","Calculated fields","Report development","ERP implementation lifecycles","Requirements gathering","Configuration","Testing","Data migration","Cutover planning","Hypercare","Financial accounting processes","General Ledger","Multi-entity consolidation","Intercompany accounting","Management reporting","Stakeholder management","Communication skills","Analytical skills","Problem-solving skills","Data accuracy and integrity"],"x-skills-preferred":["SQL","Python","Scripting languages","BigQuery","Cloud data warehouses","SOX compliance requirements","Internal controls","EPM/FP&A systems","Pigment","Anaplan","Adaptive Planning","ERP integration","High-growth technology companies","IPO readiness","Workday HCM","HCM-Financials integration points","Data migration tools","ETL processes","Reconciliation frameworks"],"datePosted":"2026-04-18T15:45:44.214Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Technology","skills":"Workday Financials, Workday Financial Data Model (FDM), Chart of Accounts design, Worktags configuration, Dimensional hierarchies, Accounting Books setup, Prism Analytics, Data modeling, Source integration, Calculated fields, Report development, ERP implementation lifecycles, Requirements gathering, Configuration, Testing, Data migration, Cutover planning, Hypercare, Financial accounting processes, General Ledger, Multi-entity consolidation, Intercompany accounting, Management reporting, Stakeholder management, Communication skills, Analytical skills, Problem-solving skills, Data accuracy and integrity, SQL, Python, Scripting languages, BigQuery, Cloud data warehouses, SOX compliance requirements, Internal controls, EPM/FP&A systems, Pigment, Anaplan, Adaptive Planning, ERP integration, High-growth technology companies, IPO readiness, Workday HCM, 
HCM-Financials integration points, Data migration tools, ETL processes, Reconciliation frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":265000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f49203e0-6c6"},"title":"Research Engineer, Science of Scaling","description":"<p>We are seeking a Research Engineer/Scientist to join the Science of Scaling team, responsible for developing the next generation of large language models. In this role, you will work at the intersection of cutting-edge research and practical engineering, contributing to the development of safe, steerable, and trustworthy AI systems.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Conduct research into the science of converting compute into intelligence</li>\n<li>Independently lead small research projects while collaborating with team members on larger initiatives</li>\n<li>Design, run, and analyze scientific experiments to advance our understanding of large language models</li>\n<li>Optimize training infrastructure to improve efficiency and reliability</li>\n<li>Develop dev tooling to enhance team productivity</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have significant software engineering experience and a proven track record of building complex systems</li>\n<li>Hold an advanced degree (MS or PhD) in Computer Science, Machine Learning, or a related field</li>\n<li>Are proficient in Python and experienced with deep learning frameworks</li>\n<li>Are results-oriented with a bias towards flexibility and impact</li>\n<li>Enjoy pair programming and collaborative work, and are willing to take on tasks outside your job description to support the team</li>\n<li>View research and engineering as two sides of the same coin, seeking to understand all aspects of the research program to maximize impact</li>\n<li>Care about 
the societal impacts of your work and have ambitious goals for AI safety and general progress</li>\n</ul>\n<p>Strong candidates may have:</p>\n<ul>\n<li>Experience with JAX</li>\n<li>Experience with reinforcement learning</li>\n<li>Experience working on high-performance, large-scale ML systems</li>\n<li>Familiarity with accelerators, Kubernetes, and OS internals</li>\n<li>Experience with language modeling using transformer architectures</li>\n<li>Background in large-scale ETL processes</li>\n<li>Experience with distributed training at scale (thousands of accelerators)</li>\n</ul>\n<p>Strong candidates need not have:</p>\n<ul>\n<li>Experience in all of the above areas; we value breadth of interest and willingness to learn over checking every box</li>\n<li>Prior work specifically on language models or transformers; strong engineering fundamentals and ML knowledge transfer well</li>\n<li>An advanced degree; exceptional engineers with strong research instincts are equally encouraged to apply</li>\n</ul>\n<p>The annual compensation range for this role is £260,000-£630,000 GBP.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f49203e0-6c6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5126127008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£260,000-£630,000 GBP","x-skills-required":["Python","Deep learning frameworks","Software engineering","Machine learning","Advanced degree in Computer Science or related field"],"x-skills-preferred":["JAX","Reinforcement learning","High-performance, large-scale ML systems","Accelerators","Kubernetes","OS internals","Language modeling using transformer
architectures","Large-scale ETL processes","Distributed training at scale"],"datePosted":"2026-04-18T15:42:40.887Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Deep learning frameworks, Software engineering, Machine learning, Advanced degree in Computer Science or related field, JAX, Reinforcement learning, High-performance, large-scale ML systems, Accelerators, Kubernetes, OS internals, Language modeling using transformer architectures, Large-scale ETL processes, Distributed training at scale","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":260000,"maxValue":630000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6e92655b-cbb"},"title":"Senior Data Scientist - Banking","description":"<p>We&#39;re looking for a full-stack Data Scientist to support our Cards &amp; Credit roadmap, partnering closely with Product, Engineering, Design, Underwriting, and Operations to shape how our card and credit products evolve and scale.</p>\n<p>In this role, you&#39;ll apply strong analytical judgment and product intuition to help us understand customer behaviour, evaluate trade-offs, and make smart investment decisions across the cards and lending lifecycles, from eligibility and activation to spend, retention, incentives, and credit performance.
You&#39;ll help build a data-informed culture across Mercury so teams can move quickly, measure what matters, and invest intelligently.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Bringing impeccable communication and complete ownership, independently identifying opportunities, developing strong points of view, and influencing executives, Cards &amp; Credit leaders, and cross-functional partners through clear, concise, and persuasive storytelling.</li>\n<li>Developing a nuanced understanding of cardholder behaviour and economics, helping teams reason about trade-offs between growth, engagement, risk, and unit economics.</li>\n<li>Defining, owning, and analysing metrics that inform both tactical decisions and long-term strategy across the cards and credit lifecycle (e.g., eligibility, activation, spend, utilisation, rewards, retention, loss signals).</li>\n<li>Designing and evaluating experiments using rigorous statistical approaches, including A/B testing, cohort analysis, causal inference techniques, and trend analysis.</li>\n<li>Building and improving data pipelines and tools to streamline data collection, processing, and analysis workflows, ensuring the integrity, reliability, and security of data assets.</li>\n<li>Building and deploying predictive models to forecast key outcomes, inform product treatments, and deepen understanding of causal drivers.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>7+ years of experience working with large datasets to drive product or business impact in data science or analytics roles.</li>\n<li>Fluency in SQL and comfort with Python.</li>\n<li>Strong judgment in defining and analysing product metrics, running experiments, and translating ambiguous questions into structured analyses.</li>\n<li>Exceptional proactivity and independence, identifying opportunities, forming strong points of view, and making your case to stakeholders.</li>\n<li>Experience with ETL processes and modern data modelling (e.g., dbt,
dimensional models, Airflow), with a solid understanding of how data is produced and consumed.</li>\n<li>Experience in analytical approaches ranging from behavioural modelling to experimentation to optimisation, and, importantly, knowing when simpler approaches are the right answer.</li>\n<li>Ability to apply AI tools to accelerate analytical and business workflows, improving scalability and decision quality and reducing manual or repetitive work across teams.</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience working on cards or credit products, with familiarity in card economics and lifecycle concepts (e.g., spend behaviour, interchange, rewards and incentives, utilisation, credit limits, retention).</li>\n<li>Experience developing quantitative pricing models or engines (e.g., dynamic pricing, incentive optimisation, or marketplace pricing systems).</li>\n<li>Experience applying optimisation techniques to resource allocation or decision systems (e.g., customer operations, capacity planning, or policy optimisation).</li>\n<li>Experience building or supporting credit models, including probability of default modelling, cashflow modelling, or dynamic credit limit setting.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6e92655b-cbb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mercury","sameAs":"https://www.mercury.com/","logo":"https://logos.yubhub.co/mercury.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mercury/jobs/5799320004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$200,700 - $250,900 USD","x-skills-required":["SQL","Python","ETL processes","modern data modelling","A/B testing","cohort analysis","causal inference techniques","trend analysis","data pipelines","predictive models"],"x-skills-preferred":["cardholder behaviour and
economics","quantitative pricing models","optimisation techniques","credit models","probability of default modelling","cashflow modelling","dynamic credit limit setting"],"datePosted":"2026-04-17T12:45:16.180Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"SQL, Python, ETL processes, modern data modelling, A/B testing, cohort analysis, causal inference techniques, trend analysis, data pipelines, predictive models, cardholder behaviour and economics, quantitative pricing models, optimisation techniques, credit models, probability of default modelling, cashflow modelling, dynamic credit limit setting","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200700,"maxValue":250900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_18fad01e-942"},"title":"Salesforce Developer","description":"<p>We&#39;re hiring a Salesforce Developer to deepen Mercury&#39;s technical bench. This role fills a critical gap today: hands-on engineering capacity to implement platform capabilities that already exist on paper , and reduce tool sprawl by building stronger foundations directly into Salesforce and adjacent systems.</p>\n<p>As a Salesforce Developer, you&#39;ll work closely with Architecture, Data, TPM, and Systems Experience to turn intent into reality. 
Your responsibilities will include:</p>\n<ul>\n<li>Building and maintaining Salesforce functionality (flows, automation, objects, permissions)</li>\n<li>Implementing architectural designs without diverging from intent</li>\n<li>Improving reliability, performance, and maintainability of GTM systems</li>\n<li>Reducing tech debt and replacing fragile workarounds with durable solutions</li>\n<li>Partnering with Data Strategy to ensure clean data generation</li>\n<li>Supporting integrations and tooling across the revenue stack</li>\n<li>Participating in incident response and platform debugging</li>\n<li>Helping migrate functionality into core platforms rather than adding new tools</li>\n</ul>\n<p>To succeed in this role, you&#39;ll need:</p>\n<ul>\n<li>8+ years experience in Salesforce development or platform engineering roles</li>\n<li>Strong hands-on experience with Salesforce automation, flows, object models, permissions, and integrations</li>\n<li>Excited to own and maintain API-based integrations between Salesforce and downstream/upstream systems</li>\n<li>Demonstrated ability to build and refactor systems with durability, performance, and maintainability in mind</li>\n<li>Experience partnering with cross-functional teams to implement technical solutions</li>\n<li>Strong debugging and problem-solving skills in production environments</li>\n<li>Clear communication skills and comfort explaining technical tradeoffs</li>\n</ul>\n<p>Preferred qualifications include experience with Salesforce Data Cloud, familiarity with GTM workflows, revenue operations, or customer lifecycle systems, and exposure to data pipelines, ETL processes, or downstream analytics usage.</p>\n<p>The total rewards package at Mercury includes base salary, equity (stock options), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. 
New hire offers are made based on a candidate&#39;s experience, expertise, geographic location, and internal pay equity relative to peers.</p>\n<p>Our target new hire base salary ranges for this role are:</p>\n<ul>\n<li>US employees in New York City, Los Angeles, Seattle, or the San Francisco Bay Area: $158,400 - 198,000</li>\n<li>US employees outside of New York City, Los Angeles, Seattle, or the San Francisco Bay Area: $142,600 - 178,200</li>\n<li>Canadian employees (any location): CAD $149,700 - $187,100</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_18fad01e-942","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mercury","sameAs":"https://www.mercury.com/","logo":"https://logos.yubhub.co/mercury.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mercury/jobs/5857783004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$142,600 - 198,000","x-skills-required":["Salesforce development","Platform engineering","Automation","Flows","Object models","Permissions","Integrations","API-based integrations","Data strategy","GTM systems","Revenue stack","Incident response","Platform debugging"],"x-skills-preferred":["Salesforce Data Cloud","GTM workflows","Revenue operations","Customer lifecycle systems","Data pipelines","ETL processes","Downstream analytics usage"],"datePosted":"2026-04-17T12:45:15.149Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Salesforce development, Platform engineering, Automation, Flows, Object models, Permissions, Integrations, API-based integrations, Data strategy, GTM systems, 
Revenue stack, Incident response, Platform debugging, Salesforce Data Cloud, GTM workflows, Revenue operations, Customer lifecycle systems, Data pipelines, ETL processes, Downstream analytics usage","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":142600,"maxValue":198000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a3e7e545-094"},"title":"FBS Data Production Support Analyst (Data Pipelines)","description":"<p><strong>Role Overview</strong></p>\n<p>The purpose of this role is to ensure smooth operations of our production data assets. Activities will include monitoring production systems for incident occurrence, alerting applicable parties when incidents arise, and triaging and managing incidents. They will also carry out activities to prevent production incidents.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Work with Data Pipelines, handling incidents and RCA</li>\n<li>Administers, analyzes, and prioritizes systems issues and negotiates a course of action for resolution.</li>\n<li>Supports workflows and solutions; troubleshoots user errors and supports reporting capabilities.</li>\n<li>Utilizes system monitoring utilities to monitor system availability.</li>\n<li>Extracts and compiles system monitoring data to create availability scorecards and reports.</li>\n<li>System Monitoring: Continuously monitor IT systems to ensure optimal performance and availability, identifying and addressing potential issues before they escalate.</li>\n<li>Monitoring and Maintenance: Regularly monitor production data assets to ensure they are functioning correctly and efficiently. Alert applicable parties if an issue arises in production.</li>\n<li>Issue Resolution: Work with the data team to identify, diagnose, and resolve technical issues related to production data assets.
Work with relevant teams to implement effective solutions.</li>\n<li>Incident Management: Manage and prioritize incidents, ensuring that they are resolved promptly and efficiently and that the incident management process is followed. Document incidents and resolutions for future reference.</li>\n<li>Incident Management: Respond to and resolve technical issues reported by users or automated monitoring alerts. This includes diagnosing problems, identifying solutions, and implementing fixes.</li>\n<li>Problem Analysis: Analyze recurring issues to identify root causes and implement long-term solutions to prevent future occurrences.</li>\n<li>Root Cause Analysis: Conduct thorough investigations to determine the underlying causes of recurring incidents and implement preventive measures.</li>\n<li>Preventative Measures: Identify incidents that recur and put solutions in place to prevent recurrence.</li>\n<li>Data Integrity: Work with the data team to ensure the accuracy and integrity of data produced and provided to the business, and to implement and maintain quality control measures that prevent errors.</li>\n<li>Documentation: Maintain comprehensive documentation of processes, system configurations, and troubleshooting procedures. Ensure documentation is created and owned, whether by the data team or by the production support team.</li>\n<li>Support: Provide support to data teams, data users, and stakeholders. Respond to inquiries and assist with requests as applicable.</li>\n<li>Optimization: Identify opportunities to optimize data production processes and implement improvements to enhance efficiency.</li>\n<li>Performance Optimization: Analyze system performance and identify areas for improvement.
Suggest and implement changes to enhance system efficiency and reliability.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a3e7e545-094","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/ffvEvDAAYzjgBfJeCMdK9E/remote-fbs-data-production-support-analyst-(data-pipelines)-in-mexico-at-capgemini","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Pipelines","Incident Management","System Monitoring","Data Integrity","Documentation","Problem Analysis","Root Cause Analysis","Preventative Measures","SQL","Python","Java"],"x-skills-preferred":["ETL processes","Database management","Data warehousing"],"datePosted":"2026-03-09T17:03:31.282Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Finance","skills":"Data Pipelines, Incident Management, System Monitoring, Data Integrity, Documentation, Problem Analysis, Root Cause Analysis, Preventative Measures, SQL, Python, Java, ETL processes, Database management, Data warehousing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bc13ad37-8f9"},"title":"ACT Integration Solutions Specialist – Aladdin Platform, Associate","description":"<p>About this role</p>\n<p>We are seeking a dynamic Specialist with a proven track record in data integration and data conversion to join our growing Integration Solutions Practice. 
This role is pivotal in scaling our projects across various asset classes and front-to-back Aladdin services, employing Aladdin’s Studio platform and its robust capabilities in data integration and onboarding – which include file-based mechanisms, APIs, and the Aladdin Data Cloud. This is a client-facing position offering an opportunity to collaborate with a broad range of internal stakeholders at BlackRock.</p>\n<p>Main Function</p>\n<p>As an Integration Solutions Specialist for the Aladdin Platform, you will play a crucial role on Aladdin client projects in orchestrating specific technology channels that cover client activities related to data integration and data conversion or onboarding. You will be responsible for driving our project channels, providing best practice guidance to our clients’ technology teams, and supporting those teams as they build their integration and transition their data to our platform. You will work closely with the core Aladdin implementation teams to deliver on client commitments. 
Your expertise will be instrumental in securing an on-time and on-budget project outcome.</p>\n<p>Responsibilities</p>\n<p>Your key responsibilities will include:</p>\n<ul>\n<li><p>Own and manage Aladdin channels across Data Interfaces, Data Conversions, and Data Cloud (ADC) Deployments</p>\n</li>\n<li><p>Liaise with relevant client counterparts, including senior technology managers responsible for delivering the required integration and migration builds</p>\n</li>\n<li><p>Work with client teams to create the necessary plans and trackers, and coordinate resources and updates</p>\n</li>\n<li><p>Lead future state architecture sessions with client teams, to drive project scope and identify challenges</p>\n</li>\n<li><p>Lead and assist in interface design and data mapping sessions, support client development needs</p>\n</li>\n<li><p>Support the enablement of our clients working with Aladdin Studio technology and Aladdin data</p>\n</li>\n<li><p>Gain an in-depth knowledge of Aladdin functionality and workflows to ensure the correct integration solutions are deployed and utilized</p>\n</li>\n<li><p>Work closely with partners across the Aladdin business and platform to support client development needs</p>\n</li>\n</ul>\n<p>Preferred Qualifications</p>\n<ul>\n<li><p>Bachelor’s or Master’s degree in engineering, Computer Sciences, Mathematics, or a related quantitative field</p>\n</li>\n<li><p>Fluency in English and excellent communication skills, with the ability to convey concepts clearly and simply</p>\n</li>\n<li><p>Experience with financial systems integration, cloud-based computing, or automation tools and methodologies</p>\n</li>\n<li><p>Understanding of finance and public or private markets, the investment lifecycle, and associated workflows</p>\n</li>\n<li><p>Demonstrated problem-solving prowess and a proactive mindset</p>\n</li>\n<li><p>Well-organized with strong project management capabilities</p>\n</li>\n<li><p>Ability to work collaboratively and independently 
within a team environment</p>\n</li>\n<li><p>Proficiency in SQL and a working knowledge of ETL processes are a must-have</p>\n</li>\n<li><p>Familiarity with programming languages such as Python or R, APIs, and data transformation tools such as Alteryx Designer is highly regarded</p>\n</li>\n</ul>\n<p>Our benefits</p>\n<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>\n<p>Our hybrid work model</p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p>About BlackRock</p>\n<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses.
Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bc13ad37-8f9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/g8YUpXA9aV6CnPMQkdnvSM/act-integration-solutions-specialist-%E2%80%93-aladdin-platform%2C-associate-in-edinburgh-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data integration","data conversion","Aladdin Studio platform","SQL","ETL processes","Python","R","APIs","data transformation tools"],"x-skills-preferred":["financial systems integration","cloud-based computing","automation tools and methodologies","Alteryx Designer"],"datePosted":"2026-03-09T16:42:55.570Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Edinburgh"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"data integration, data conversion, Aladdin Studio platform, SQL, ETL processes, Python, R, APIs, data transformation tools, financial systems integration, cloud-based computing, automation tools and methodologies, Alteryx Designer"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_07222a52-75c"},"title":"Data Analyst","description":"<p>We&#39;re looking for a Data Analyst to join our team, working in our Tower Bridge office three days a week. 
As a Data Analyst, you&#39;ll be responsible for taking ownership of the data relationship with business stakeholders, reporting on baseline performance and trends, and driving us towards a self-serve first culture.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Taking ownership of the data relationship with business stakeholders, bridging commercial and product discussions</li>\n<li>Reporting on our baseline performance and trends, keeping our finger on the pulse</li>\n<li>Driving us towards a self-serve first culture</li>\n<li>Reporting on complex experimentation</li>\n<li>Close analyst-stakeholder relationships are the most crucial component of a well-functioning data team; stakeholders should consider you part of their team</li>\n<li>Varied work across data disciplines, e.g. data product design, data modelling, analytical deep-dives, contributing to data science, AI and machine learning projects</li>\n</ul>\n<p>What does a great candidate look like?</p>\n<ul>\n<li>Demonstrates curiosity and a proactive approach to learning, constantly seeking opportunities to deepen understanding and explore new ideas</li>\n<li>Exhibits a strong desire for ownership and accountability, taking the initiative to drive projects forward and deliver impactful results</li>\n<li>Skilled at translating complex business challenges into actionable data insights, driving meaningful outcomes and business impact</li>\n<li>Excellent communication skills, able to convey ideas clearly and effectively to both technical and non-technical stakeholders</li>\n<li>While recognising the value of experience, greater emphasis is placed on finding the right cultural fit and individual capabilities for the role</li>\n<li>Strives to set high standards and continually seeks opportunities for personal and team growth, fostering an environment of continuous improvement</li>\n<li>Inspired by the prospect of contributing to industry innovation and eager to participate in reimagining the future
landscape</li>\n</ul>\n<p>Technical Skills</p>\n<ul>\n<li>Proficient in SQL with a proven track record of handling complex data queries and manipulation</li>\n<li>Demonstrable experience with Tableau or similar data visualisation tools, including the ability to create insightful and user-friendly dashboards and reports</li>\n<li>Experience with DBT (Data Build Tool) or similar data transformation technologies, with a keen understanding of data modelling and ETL processes</li>\n<li>A background in statistics and/or programming, with the ability to apply statistical methods and algorithms to extract insights from data, especially surrounding experimentation</li>\n<li>Familiarity with machine learning (ML) and artificial intelligence (AI) techniques, preferably in Python, R, or another programming language, enabling the development and implementation of predictive models and data-driven solutions</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_07222a52-75c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Zoopla","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/654B0ECCFF","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Tableau","DBT","data modelling","ETL processes"],"x-skills-preferred":["statistics","programming","machine learning","artificial intelligence"],"datePosted":"2026-03-09T16:07:09.849Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Tableau, DBT, data modelling, ETL processes, statistics, programming, machine learning, artificial 
intelligence"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bb7bb8e9-e31"},"title":"Data Engineer - 12 Month TFT","description":"<p>We&#39;re looking for an experienced Data Engineer to join our team at Electronic Arts. As a Data Engineer, you will collaborate with the Marketing team to implement data strategies and develop complex ETL pipelines that support dashboards for promoting deeper understanding of our business.</p>\n<p>You will have experience developing and establishing scalable, efficient, automated processes for large-scale data analyses. You will also stay informed of the latest trends and research on all aspects of data engineering and analytics.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Design, implement and maintain efficient, scalable and robust data pipelines using cloud-native and open-source technologies</li>\n<li>Develop and optimize ETL/ELT processes to ingest, transform, and deliver data from diverse sources</li>\n<li>Automate deployment and monitoring of data workflows using CI/CD best practices</li>\n<li>Guide communications between our users and studio engineers to provide scalable end-to-end solutions</li>\n<li>Promote strategies to improve our data modelling, quality and architecture</li>\n<li>Participate in code reviews, mentor junior engineers, and contribute to team knowledge sharing</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4+ years relevant industry experience in a data engineering role and graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field</li>\n<li>Proficiency in writing SQL queries and knowledge of cloud-based databases like Snowflake, Redshift, BigQuery or other big data solutions</li>\n<li>Experience in data modelling and tools such as dbt, ETL processes, and data warehousing</li>\n<li>Experience with at least one of the programming languages like Python, 
Java</li>\n<li>Experience with version control and code review tools such as Git</li>\n<li>Knowledge of latest data pipeline orchestration tools such as Airflow</li>\n<li>Experience with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code tools (e.g., Docker, Terraform, CloudFormation)</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>Experience in gaming and working with its telemetry data or data from similar sources</li>\n<li>Experience with big data platforms and technologies such as EMR, Databricks, Kafka, Spark, Iceberg</li>\n<li>Experience in developing engineering solutions based on near real-time/streaming dataset</li>\n<li>Exposure to AI/ML, MLOps concepts and collaboration with data science or AI teams.</li>\n</ul>\n<p>Pay Transparency - North America</p>\n<p>The ranges listed below are what EA in good faith expects to pay applicants for this role in these locations at the time of this posting. If you reside in a different location, a recruiter will advise on the applicable range and benefits. Pay offered will be determined based on a number of relevant business and candidate factors (e.g. 
education, qualifications, certifications, experience, skills, geographic location, or business needs).</p>\n<p>Pay Ranges: $100,000 - $139,500 CAD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bb7bb8e9-e31","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-12-month-TFT/212451","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"temporary","x-salary-range":"$100,000 - $139,500 CAD","x-skills-required":["SQL","cloud-based databases","data modelling","ETL processes","data warehousing","Python","Java","Git","Airflow","cloud platforms","infrastructure-as-code tools"],"x-skills-preferred":["gaming telemetry data","big data platforms","EMR","Databricks","Kafka","Spark","Iceberg","AI/ML","MLOps"],"datePosted":"2026-03-09T10:58:20.588Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"TEMPORARY","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, cloud-based databases, data modelling, ETL processes, data warehousing, Python, Java, Git, Airflow, cloud platforms, infrastructure-as-code tools, gaming telemetry data, big data platforms, EMR, Databricks, Kafka, Spark, Iceberg, AI/ML, MLOps","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":100000,"maxValue":139500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a8eb2e15-0bb"},"title":"Senior Business Systems Analyst, Finance Systems","description":"<p><strong>About the role</strong></p>\n<p>We are seeking an experienced Senior Business Systems Analyst to join 
our Finance Systems team at Anthropic. In this role, you will serve as the internal functional lead for our Workday Financials implementation, owning the design and configuration of the Financial Data Model (FDM), Chart of Accounts, and dimensional structures that will serve as the source of truth for financial reporting.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li><strong>ERP Core Financials Implementation:</strong> Serve as internal functional lead for Workday Financials implementation, partnering with consultants to drive configuration decisions, validate designs, and ensure business requirements are met</li>\n</ul>\n<ul>\n<li><strong>Financial Data Model (FDM) Design:</strong> Own the design and configuration of Chart of Accounts, Worktags, dimensional hierarchies, and Accounting Books that will serve as the source of truth for all financial reporting, ensuring support for both GAAP and Management reporting requirements</li>\n</ul>\n<ul>\n<li><strong>Prism Analytics Development:</strong> Develop and maintain Prism/Accounting Center solutions from source analysis and ingestion design through build, testing, cutover, and hypercare, including integration with external data sources like BigQuery and Pigment</li>\n</ul>\n<ul>\n<li><strong>Requirements Gathering &amp; Reporting:</strong> Gather business requirements from Finance, Accounting, and FP&amp;A stakeholders, translating them into hands-on development of executive reporting, dashboards, and analytics solutions</li>\n</ul>\n<ul>\n<li><strong>Workshop Participation &amp; Solution Design:</strong> Participate in implementation workshops, challenge requirements, and translate business needs into buildable designs and testable acceptance criteria; manage defects and data quality issues throughout the project lifecycle</li>\n</ul>\n<ul>\n<li><strong>Cross-Functional Collaboration:</strong> Collaborate with Integrations, Security, and Financials configuration teams to align master data, 
journals, controls, and performance service level agreements; partner with Data Infrastructure and BizTech teams on system integrations</li>\n</ul>\n<ul>\n<li><strong>Cutover &amp; Hypercare Planning:</strong> Prepare cutover plans, data migration strategies, reconciliation frameworks, and hypercare plans; document data lineage, controls, and audit artifacts to support SOX compliance requirements</li>\n</ul>\n<ul>\n<li><strong>Platform Expansion &amp; Adoption:</strong> Work closely with engineering teams and business stakeholders to drive ongoing expansion and adoption of the Workday platform, identifying opportunities for process improvement and automation</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 8+ years of experience in finance systems, ERP implementation, or business systems analysis roles, with at least 5 years of hands-on Workday Financials experience</li>\n</ul>\n<ul>\n<li>Possess deep expertise in Workday Financial Data Model (FDM), including Chart of Accounts design, Worktags configuration, dimensional hierarchies, and Accounting Books setup</li>\n</ul>\n<ul>\n<li>Have strong experience with Workday Prism Analytics, including data modeling, source integration, calculated fields, and report development</li>\n</ul>\n<ul>\n<li>Are skilled at translating complex business requirements into technical solutions, bridging the gap between finance stakeholders and technical implementation teams</li>\n</ul>\n<ul>\n<li>Have experience with full ERP implementation lifecycles, including requirements gathering, configuration, testing, data migration, cutover planning, and hypercare</li>\n</ul>\n<ul>\n<li>Possess strong understanding of financial accounting processes including General Ledger, multi-entity consolidation, intercompany accounting, and management reporting</li>\n</ul>\n<ul>\n<li>Have excellent stakeholder management and communication skills, with ability to work effectively with finance leadership, accounting teams, 
and technical partners</li>\n</ul>\n<ul>\n<li>Demonstrate strong analytical and problem-solving skills with attention to detail and commitment to data accuracy and integrity</li>\n</ul>\n<ul>\n<li>Are comfortable working in fast-paced, high-growth environments with evolving requirements and tight timelines</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Background in accounting, finance, or CPA certification with understanding of GAAP/IFRS reporting requirements</li>\n</ul>\n<ul>\n<li>Experience with Workday Accounting Center for complex journal automation and subledger accounting</li>\n</ul>\n<ul>\n<li>Technical proficiency with SQL, Python, or scripting languages for data analysis and integration support</li>\n</ul>\n<ul>\n<li>Experience integrating Workday with external data platforms such as BigQuery or cloud data warehouses</li>\n</ul>\n<ul>\n<li>Knowledge of SOX compliance requirements and internal controls for financial systems</li>\n</ul>\n<ul>\n<li>Experience with EPM/FP&amp;A systems such as Pigment, Anaplan, or Adaptive Planning and their integration with ERP</li>\n</ul>\n<ul>\n<li>Prior experience at high-growth technology companies scaling toward IPO readiness</li>\n</ul>\n<ul>\n<li>Familiarity with Workday HCM and understanding of HCM-Financials integration points</li>\n</ul>\n<ul>\n<li>Experience with data migration tools, ETL processes, and reconciliation frameworks for ERP implementations</li>\n</ul>\n<p>The annual compensation range for this role is $list</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a8eb2e15-0bb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4991194008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"list","x-skills-required":["Workday Financials","Financial Data Model (FDM)","Chart of Accounts","Worktags","Dimensional Hierarchies","Accounting Books","Prism Analytics","Data Modeling","Source Integration","Calculated Fields","Report Development","ERP Implementation","Requirements Gathering","Configuration","Testing","Data Migration","Cutover Planning","Hypercare","Financial Accounting","General Ledger","Multi-Entity Consolidation","Intercompany Accounting","Management Reporting","Stakeholder Management","Communication","Analytical Skills","Problem-Solving Skills","Data Accuracy","Integrity"],"x-skills-preferred":["Workday Accounting Center","SQL","Python","Scripting Languages","BigQuery","Cloud Data Warehouses","SOX Compliance","Internal Controls","EPM/FP&A Systems","Pigment","Anaplan","Adaptive Planning","ERP Integration","High-Growth Technology Companies","IPO Readiness","Workday HCM","HCM-Financials Integration","Data Migration Tools","ETL Processes","Reconciliation Frameworks"],"datePosted":"2026-03-08T13:51:39.354Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Technology","skills":"Workday Financials, Financial Data Model (FDM), Chart of Accounts, Worktags, Dimensional Hierarchies, Accounting Books, Prism Analytics, Data Modeling, Source Integration, Calculated Fields, Report Development, ERP Implementation, Requirements Gathering, Configuration, Testing, Data Migration, 
Cutover Planning, Hypercare, Financial Accounting, General Ledger, Multi-Entity Consolidation, Intercompany Accounting, Management Reporting, Stakeholder Management, Communication, Analytical Skills, Problem-Solving Skills, Data Accuracy, Integrity, Workday Accounting Center, SQL, Python, Scripting Languages, BigQuery, Cloud Data Warehouses, SOX Compliance, Internal Controls, EPM/FP&A Systems, Pigment, Anaplan, Adaptive Planning, ERP Integration, High-Growth Technology Companies, IPO Readiness, Workday HCM, HCM-Financials Integration, Data Migration Tools, ETL Processes, Reconciliation Frameworks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9c72720b-6af"},"title":"Research Engineer, Science of Scaling","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the role</strong></p>\n<p>Anthropic is seeking a Research Engineer/Scientist to join the Science of Scaling team, responsible for developing the next generation of large language models. In this role, you will work at the intersection of cutting-edge research and practical engineering, contributing to the development of safe, steerable, and trustworthy AI systems. 
You&#39;ll contribute across the entire stack, from low-level optimizations to high-level algorithm and experimental design, balancing research goals with practical engineering constraints.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Conduct research into the science of converting compute into intelligence</li>\n<li>Independently lead small research projects while collaborating with team members on larger initiatives</li>\n<li>Design, run, and analyse scientific experiments to advance our understanding of large language models</li>\n<li>Optimise training infrastructure to improve efficiency and reliability</li>\n<li>Develop dev tooling to enhance team productivity</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have significant software engineering experience and a proven track record of building complex systems</li>\n<li>Hold an advanced degree (MS or PhD) in Computer Science, Machine Learning, or a related field</li>\n<li>Are proficient in Python and experienced with deep learning frameworks</li>\n<li>Are results-oriented with a bias towards flexibility and impact</li>\n<li>Enjoy pair programming and collaborative work, and are willing to take on tasks outside your job description to support the team</li>\n<li>View research and engineering as two sides of the same coin, seeking to understand all aspects of the research program to maximise impact</li>\n<li>Care about the societal impacts of your work and have ambitious goals for AI safety and general progress</li>\n</ul>\n<p><strong>Strong candidates may have:</strong></p>\n<ul>\n<li>Experience with JAX</li>\n<li>Experience with reinforcement learning</li>\n<li>Experience working on high-performance, large-scale ML systems</li>\n<li>Familiarity with accelerators, Kubernetes, and OS internals</li>\n<li>Experience with language modeling using transformer architectures</li>\n<li>Background in large-scale ETL processes</li>\n<li>Experience with distributed training at scale 
(thousands of accelerators)</li>\n</ul>\n<p><strong>Strong candidates need not have:</strong></p>\n<ul>\n<li>Experience in all of the above areas — we value breadth of interest and willingness to learn over checking every box</li>\n<li>Prior work specifically on language models or transformers; strong engineering fundamentals and ML knowledge transfer well</li>\n<li>An advanced degree — exceptional engineers with strong research instincts are equally encouraged to apply</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. 
Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. 
This research continues many of the directions our team worked on prior to Anthropic, including</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9c72720b-6af","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5126127008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£260,000 - £630,000GBP","x-skills-required":["software engineering","Python","deep learning frameworks","JAX","reinforcement learning","high-performance, large-scale ML systems","accelerators","Kubernetes","OS internals","language modeling using transformer architectures","large-scale ETL processes","distributed training at scale"],"x-skills-preferred":["JAX","reinforcement learning","high-performance, large-scale ML systems","accelerators","Kubernetes","OS internals","language modeling using transformer architectures","large-scale ETL processes","distributed training at scale"],"datePosted":"2026-03-08T13:50:55.750Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, Python, deep learning frameworks, JAX, reinforcement learning, high-performance, large-scale ML systems, accelerators, Kubernetes, OS internals, language modeling using transformer architectures, large-scale ETL processes, distributed training at scale, JAX, reinforcement learning, high-performance, large-scale ML systems, accelerators, Kubernetes, OS internals, language modeling using transformer architectures, large-scale ETL processes, distributed training at 
scale","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":260000,"maxValue":630000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_390c02fb-0e8"},"title":"Research Engineer, Pretraining","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Conduct research and implement solutions in areas such as model architecture, algorithms, data processing, and optimizer development</li>\n</ul>\n<ul>\n<li>Independently lead small research projects while collaborating with team members on larger initiatives</li>\n</ul>\n<ul>\n<li>Design, run, and analyse scientific experiments to advance our understanding of large language models</li>\n</ul>\n<ul>\n<li>Optimise and scale our training infrastructure to improve efficiency and reliability</li>\n</ul>\n<ul>\n<li>Develop and improve dev tooling to enhance team productivity</li>\n</ul>\n<ul>\n<li>Contribute to the entire stack, from low-level optimisations to high-level model design</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<ul>\n<li>Advanced degree (MS or PhD) in Computer Science, Machine Learning, or a related field</li>\n</ul>\n<ul>\n<li>Strong software engineering skills with a proven track record of building complex systems</li>\n</ul>\n<ul>\n<li>Expertise in Python and experience with deep learning frameworks (PyTorch preferred)</li>\n</ul>\n<ul>\n<li>Familiarity with large-scale machine learning, particularly in the context of language models</li>\n</ul>\n<ul>\n<li>Ability to balance 
research goals with practical engineering constraints</li>\n</ul>\n<ul>\n<li>Strong problem-solving skills and a results-oriented mindset</li>\n</ul>\n<ul>\n<li>Excellent communication skills and ability to work in a collaborative environment</li>\n</ul>\n<ul>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p><strong>Preferred Experience:</strong></p>\n<ul>\n<li>Work on high-performance, large-scale ML systems</li>\n</ul>\n<ul>\n<li>Familiarity with GPUs, Kubernetes, and OS internals</li>\n</ul>\n<ul>\n<li>Experience with language modelling using transformer architectures</li>\n</ul>\n<ul>\n<li>Knowledge of reinforcement learning techniques</li>\n</ul>\n<ul>\n<li>Background in large-scale ETL processes</li>\n</ul>\n<p><strong>You&#39;ll thrive in this role if you:</strong></p>\n<ul>\n<li>Have significant software engineering experience</li>\n</ul>\n<ul>\n<li>Are results-oriented with a bias towards flexibility and impact</li>\n</ul>\n<ul>\n<li>Willingly take on tasks outside your job description to support the team</li>\n</ul>\n<ul>\n<li>Enjoy pair programming and collaborative work</li>\n</ul>\n<ul>\n<li>Are eager to learn more about machine learning research</li>\n</ul>\n<ul>\n<li>Are enthusiastic to work at an organisation that functions as a single, cohesive team pursuing large-scale AI research projects</li>\n</ul>\n<ul>\n<li>Are working to align state of the art models with human values and preferences, understand and interpret deep neural networks, or develop new models to support these areas of research</li>\n</ul>\n<ul>\n<li>View research and engineering as two sides of the same coin, and seek to understand all aspects of our research program as well as possible, to maximise the impact of your insights</li>\n</ul>\n<ul>\n<li>Have ambitious goals for AI safety and general progress in the next few years, and you’re working to create the best outcomes over the long-term.</li>\n</ul>\n<p><strong>Sample 
Projects:</strong></p>\n<ul>\n<li>Optimising the throughput of novel attention mechanisms</li>\n</ul>\n<ul>\n<li>Comparing compute efficiency of different Transformer variants</li>\n</ul>\n<ul>\n<li>Preparing large-scale datasets for efficient model consumption</li>\n</ul>\n<ul>\n<li>Scaling distributed training jobs to thousands of GPUs</li>\n</ul>\n<ul>\n<li>Designing fault tolerance strategies for our training infrastructure</li>\n</ul>\n<ul>\n<li>Creating interactive visualisations of model internals, such as attention patterns</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<p>At Anthropic, we are committed to fostering a diverse and inclusive workplace. We strongly encourage applications from candidates of all backgrounds, including those from underrepresented groups in tech.</p>\n<p><strong>Logistics:</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n</ul>\n<ul>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<ul>\n<li>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</li>\n</ul>\n<ul>\n<li>Your safety matters to us. 
To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_390c02fb-0e8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5119713008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£260,000 - £630,000GBP","x-skills-required":["Python","Deep learning frameworks (PyTorch preferred)","Large-scale machine learning","Model architecture","Algorithms","Data processing","Optimizer development"],"x-skills-preferred":["GPU","Kubernetes","OS internals","Language modelling using transformer architectures","Reinforcement learning techniques","Background in large-scale ETL processes"],"datePosted":"2026-03-08T13:48:13.824Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Deep learning frameworks (PyTorch preferred), Large-scale machine learning, Model architecture, Algorithms, Data processing, Optimizer development, GPU, Kubernetes, OS internals, Language modelling using transformer architectures, Reinforcement learning techniques, Background in large-scale ETL processes","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":260000,"maxValue":630000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cd4d8376-407"},"title":"Research Engineer, Pre-training","description":"<p><strong>About 
Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Conduct research and implement solutions in areas such as model architecture, algorithms, data processing, and optimizer development</li>\n<li>Independently lead small research projects while collaborating with team members on larger initiatives</li>\n<li>Design, run, and analyse scientific experiments to advance our understanding of large language models</li>\n<li>Optimise and scale our training infrastructure to improve efficiency and reliability</li>\n<li>Develop and improve dev tooling to enhance team productivity</li>\n<li>Contribute to the entire stack, from low-level optimisations to high-level model design</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<ul>\n<li>Advanced degree (MS or PhD) in Computer Science, Machine Learning, or a related field</li>\n<li>Strong software engineering skills with a proven track record of building complex systems</li>\n<li>Expertise in Python and experience with deep learning frameworks (PyTorch preferred)</li>\n<li>Familiarity with large-scale machine learning, particularly in the context of language models</li>\n<li>Ability to balance research goals with practical engineering constraints</li>\n<li>Strong problem-solving skills and a results-oriented mindset</li>\n<li>Excellent communication skills and ability to work in a collaborative environment</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p><strong>Preferred Experience:</strong></p>\n<ul>\n<li>Work on high-performance, large-scale ML systems</li>\n<li>Familiarity with GPUs, Kubernetes, and OS internals</li>\n<li>Experience with 
language modelling using transformer architectures</li>\n<li>Knowledge of reinforcement learning techniques</li>\n<li>Background in large-scale ETL processes</li>\n</ul>\n<p><strong>You&#39;ll thrive in this role if you:</strong></p>\n<ul>\n<li>Have significant software engineering experience</li>\n<li>Are results-oriented with a bias towards flexibility and impact</li>\n<li>Willingly take on tasks outside your job description to support the team</li>\n<li>Enjoy pair programming and collaborative work</li>\n<li>Are eager to learn more about machine learning research</li>\n<li>Are enthusiastic to work at an organisation that functions as a single, cohesive team pursuing large-scale AI research projects</li>\n<li>Are working to align state of the art models with human values and preferences, understand and interpret deep neural networks, or develop new models to support these areas of research</li>\n<li>View research and engineering as two sides of the same coin, and seek to understand all aspects of our research program as well as possible, to maximise the impact of your insights</li>\n<li>Have ambitious goals for AI safety and general progress in the next few years, and you’re working to create the best outcomes over the long-term.</li>\n</ul>\n<p><strong>Sample Projects:</strong></p>\n<ul>\n<li>Optimising the throughput of novel attention mechanisms</li>\n<li>Comparing compute efficiency of different Transformer variants</li>\n<li>Preparing large-scale datasets for efficient model consumption</li>\n<li>Scaling distributed training jobs to thousands of GPUs</li>\n<li>Designing fault tolerance strategies for our training infrastructure</li>\n<li>Creating interactive visualisations of model internals, such as attention patterns</li>\n</ul>\n<p><strong>At Anthropic, we are committed to fostering a diverse and inclusive workplace. 
We strongly encourage applications from candidates of all backgrounds, including those from underrepresented groups in tech.</strong></p>\n<p><strong>If you&#39;re excited about pushing the boundaries of AI while prioritising safety and ethics, we want to hear from you!</strong></p>\n<p><strong>The annual compensation range for this role is listed below.</strong></p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p><strong>Annual Salary:</strong></p>\n<p>$350,000 - $850,000USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cd4d8376-407","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4616971008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $850,000USD","x-skills-required":["Python","Deep learning frameworks (PyTorch preferred)","Large-scale machine learning","Model architecture","Algorithms","Data processing","Optimizer development"],"x-skills-preferred":["GPU","Kubernetes","OS internals","Language modelling using transformer architectures","Reinforcement learning techniques","Background in large-scale ETL processes"],"datePosted":"2026-03-08T13:46:36.524Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, Seattle, WA, New York City, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Deep learning frameworks (PyTorch preferred), Large-scale machine learning, Model 
architecture, Algorithms, Data processing, Optimizer development, GPU, Kubernetes, OS internals, Language modelling using transformer architectures, Reinforcement learning techniques, Background in large-scale ETL processes","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_65d17f63-93d"},"title":"Software Engineer","description":"<p><strong>About the role</strong></p>\n<p>We&#39;re hiring a Software Engineer for our User Operations team to build the internal tools, data infrastructure, and AI-powered systems that make our support team the most efficient in the world. You&#39;ll work directly with our operations teams to design, implement, and scale the systems that enable us to serve millions of developers. You’ll work on everything from ticket routing optimisation to analytics dashboards to AI agents that automate workflows end-to-end.</p>\n<p>This is an internal-facing role with exponential impact on our users. You&#39;ll be building the infrastructure that makes every person on the User Ops team faster, smarter, more empathetic, and more effective. 
If you&#39;ve ever looked at a manual process and thought &#39;I could automate that,&#39; this role might suit you well.</p>\n<p><strong>What you’ll do</strong></p>\n<ul>\n<li>Design and build internal tools and applications that empower User Ops to deliver exceptional support at scale.</li>\n<li>Build and maintain data pipelines and dashboards that surface actionable insights on ticket volume, resolution times, and support quality.</li>\n<li>Optimise ticket routing and triage systems to get the right issues to the right people faster.</li>\n<li>Develop AI-powered automations that deflect tickets, assist agents, improve documentation, and expand knowledge for all customer-facing Cursor teams.</li>\n<li>Build integrations across our tooling ecosystem (ticketing systems, payment processors, CRM, data warehouse, etc.) to create seamless workflows.</li>\n<li>Partner with Technical Support Engineers and Community Support Engineers to identify pain points and ship solutions quickly.</li>\n</ul>\n<p><strong>You may be a fit if</strong></p>\n<ul>\n<li>Experience building internal tools, developer platforms, or operations infrastructure.</li>\n<li>Strong full-stack engineering skills—you&#39;re comfortable owning projects from database to UI.</li>\n<li>Experience with data pipelines, ETL processes, and analytics tooling.</li>\n<li>Familiarity with Cursor and AI-assisted development workflows.</li>\n<li>Experience integrating with third-party platforms (ticketing systems, payment processors, CRMs).</li>\n<li>Strong sense of ownership and a bias for action—you ship fast and iterate based on feedback.</li>\n<li>Self-starter with curiosity, creativity, and a distaste for manual toil.</li>\n</ul>","url":"https://yubhub.co/jobs/job_65d17f63-93d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cursor","sameAs":"https://cursor.com","logo":"https://logos.yubhub.co/cursor.com.png"},"x-apply-url":"https://cursor.com/careers/software-engineer-user-operations","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["full-stack engineering","data pipelines","ETL processes","analytics tooling","Cursor","AI-assisted development workflows","third-party platforms"],"x-skills-preferred":["internal tools","developer platforms","operations infrastructure"],"datePosted":"2026-03-08T00:19:08.445Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"full-stack engineering, data pipelines, ETL processes, analytics tooling, Cursor, AI-assisted development workflows, third-party platforms, internal tools, developer platforms, operations infrastructure"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_03b5d4bd-eb5"},"title":"Data Engineer II - Mobile Growth","description":"<p>As a Data Engineer II - Mobile Growth, you will support some of our largest game titles by helping us understand player engagement and measure the effectiveness of our marketing efforts. 
We are looking for an experienced Data Engineer with broad technical skills and the ability to work with large amounts of data.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>You will work with analysts, understand requirements, and develop technical specifications for ETLs.</p>\n<ul>\n<li>You will implement efficient, scalable and reliable data pipelines to move and transform data.</li>\n<li>You will promote strategies to improve our data modelling, quality and architecture.</li>\n<li>You will work with big data solutions, ETL pipelines and dashboard tools.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>5+ years of relevant industry experience in a data engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field</li>\n<li>Proficiency in writing SQL queries and knowledge of cloud-based databases like Snowflake, Redshift, BigQuery or other big data solutions</li>\n</ul>","url":"https://yubhub.co/jobs/job_03b5d4bd-eb5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-II-Mobile-Growth/211355","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$119,600 - $167,300 CAD","x-skills-required":["SQL","cloud-based databases","data modelling","ETL processes","data warehousing","Python","data pipeline tools","version control systems"],"x-skills-preferred":["containerization","orchestration 
technologies"],"datePosted":"2026-01-05T21:04:04.804Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, cloud-based databases, data modelling, ETL processes, data warehousing, Python, data pipeline tools, version control systems, containerization, orchestration technologies","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":119600,"maxValue":167300,"unitText":"YEAR"}}}]}