{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/quality-frameworks"},"x-facet":{"type":"skill","slug":"quality-frameworks","display":"Quality Frameworks","count":13},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c2e4b402-b6a"},"title":"Senior Content Designer","description":"<p>We&#39;re looking for a senior content designer to help define the future of communication at Dropbox. This role goes beyond UX writing: you&#39;ll shape how humans and AI interact across the product.</p>\n<p>You&#39;ll drive content strategy, UX writing, and language systems for core Dropbox experiences, including AI-powered features used by millions of people every day. 
Embedded within the Core Design team, you&#39;ll work closely with product designers, product managers, engineers, and researchers.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Define how Dropbox communicates across human- and AI-generated experiences, setting the standard for clarity, tone, and trust</li>\n<li>Design prompt frameworks, response patterns, and interaction models that power AI features</li>\n<li>Build reusable language systems and content architecture that support high-quality content at scale</li>\n<li>Establish content governance for AI-generated experiences to ensure consistency, accuracy, and privacy</li>\n<li>Partner with product design to shape mental models, flows, and information architecture</li>\n<li>Use data, research, and experimentation to continuously improve content performance and system outputs</li>\n<li>Collaborate closely with designers, engineers, product managers, researchers, and product marketing managers</li>\n<li>Raise the bar for content design by contributing to team-wide systems, critiques, and craft evolution</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>7+ years of experience in content design or UX writing</li>\n<li>Experience working on or adjacent to AI-powered, automated, or conversational interfaces (e.g., prompts, generated content, assistants)</li>\n<li>Strong systems thinking: experience creating scalable frameworks like structured content or design systems</li>\n<li>Ability to translate complex technical concepts into clear, human-centered communication</li>\n<li>Experience influencing product direction and collaborating deeply with design, product, and engineering partners</li>\n<li>Demonstrated ability to prioritize high-impact work in ambiguous, fast-evolving spaces</li>\n<li>A portfolio of work that shows both craft and systems thinking (please include this with your application)</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience defining or scaling language systems for large, complex 
products</li>\n<li>Experience designing for AI interaction models (e.g., assistants or generative interfaces)</li>\n<li>Experience with content automation, structured content, or programmatic content generation</li>\n<li>Experience establishing governance models or quality frameworks for content at scale</li>\n<li>Strong track record of driving measurable impact through content design</li>\n</ul>\n<p>Compensation:</p>\n<p>US Zone 2: $143,800-$194,600 USD; US Zone 3: $127,800-$173,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c2e4b402-b6a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dropbox","sameAs":"https://www.dropbox.com/","logo":"https://logos.yubhub.co/dropbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dropbox/jobs/7712835","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$143,800-$194,600 USD (US Zone 2), $127,800-$173,000 USD (US Zone 3)","x-skills-required":["content design","UX writing","AI-powered interfaces","structured content","design systems"],"x-skills-preferred":["language systems","content automation","programmatic content generation","governance models","quality frameworks"],"datePosted":"2026-04-18T15:54:04.304Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US: Select locations"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Design","industry":"Technology","skills":"content design, UX writing, AI-powered interfaces, structured content, design systems, language systems, content automation, programmatic content generation, governance models, quality 
frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":127800,"maxValue":194600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1e09f714-7db"},"title":"Analytics Engineer, FinTech","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other internet properties, from individual bloggers to Fortune 500 companies, protecting and accelerating them without adding hardware, installing software, or changing a line of code.</p>\n<p>Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. Cloudflare was named to Entrepreneur Magazine&#39;s Top Company Cultures list and ranked among the World&#39;s Most Innovative Companies by Fast Company.</p>\n<p>The FinTech Data Science team is central to Cloudflare&#39;s innovation and harnesses the massive amount of data generated by our network. We cover a broad scope, from optimizing Billing and Revenue operations to detecting Fraud, and possess a unique opportunity to use these insights to discover new products or transform existing ones.</p>\n<p>About the Role</p>\n<p>We are looking for an Analytics Engineer to join our FinTech Data Science team who cares deeply about data quality and usability. Sitting at the intersection of data engineering and analysis, you will be the architect of our data layer. 
While our Data Scientists focus on automating decisions, you will focus on the &#39;truth&#39; of the data, ensuring that the tables and dashboards powering our decisions are accurate, accessible, documented, and reliable.</p>\n<p>You will transform raw tables into canonical data models and own the presentation layer that leadership uses to monitor the health of our business. If you are excited to build the foundational data infrastructure that powers a multi-billion dollar fintech operation, we would love to hear from you!</p>\n<p>Day-to-day responsibilities include:</p>\n<ul>\n<li>Build out the canonical data schema for FinTech and related organizations by designing and maintaining well-structured, modular, and user-friendly data tables.</li>\n<li>Design, develop, deploy, and operate high-quality production ELT pipelines and data architectures, integrating data from various sources and formats.</li>\n<li>Architect and maintain the presentation layer in BI tools (e.g., Looker/Superset) to ensure dashboards are performant and provide a seamless self-serve experience.</li>\n<li>Act as a strategic partner to stakeholders by translating vague business questions into concrete technical solutions that drive business value.</li>\n<li>Ensure data is accurate, complete, and timely by implementing robust testing, monitoring, and validation protocols for your code and data.</li>\n<li>Establish and share best practices in performance, code quality, data governance, and discoverability while participating in mentoring initiatives.</li>\n</ul>\n<p>Required skills, knowledge, and experience:</p>\n<ul>\n<li>5+ years of experience in Analytics Engineering, Data Engineering, or related roles working with big data at scale.</li>\n<li>Expert-level SQL and proficiency in a high-level scripting language (e.g., Python, R, or Scala) for data automation and manipulation.</li>\n<li>Experience with 
workflow management tools (e.g., Airflow) to schedule and monitor complex data pipelines.</li>\n<li>Strong experience with dbt or similar frameworks for transforming data in the warehouse.</li>\n<li>Deep experience with BI tools (e.g., Looker, Superset, or Grafana) and a strong understanding of how to structure data for downstream consumption.</li>\n<li>Solid foundation in software best practices, including version control (Git), CI/CD, and data testing/quality frameworks.</li>\n<li>Ability to operate comfortably in a fast-paced environment and take ownership of projects with minimal oversight.</li>\n<li>Excellent communication skills with the ability to bridge the gap between technical engineering terms and business requirements.</li>\n<li>A learning mindset and exceptional curiosity, eagerly diving into new domains and bringing informed ideas to the table.</li>\n</ul>\n<p>Bonus Points</p>\n<p>Experience in FinTech</p>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare&#39;s enterprise customers, at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. 
Since launching the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>","url":"https://yubhub.co/jobs/job_1e09f714-7db","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7649684","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","R","Scala","Airflow","dbt","Looker","Superset","Grafana","Git","CI/CD","data testing/quality frameworks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:02.907Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, R, Scala, Airflow, dbt, Looker, Superset, Grafana, Git, CI/CD, data testing/quality frameworks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cf464643-638"},"title":"Compliance Program Manager","description":"<p>As a Compliance Program Manager for DCMA engagement and contract 
compliance, you will lead engagement with the Defense Contract Management Agency (DCMA) and ensure audit readiness, contract integrity, and compliance across autonomous aircraft, drones, ground control systems, and integrated government-owned equipment.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Working with program teams to develop strategies to meet DCMA requirements</li>\n<li>Acting as primary liaison with DCMA personnel, managing correspondence, audits, and site interactions for programs under your purview</li>\n<li>Maintaining a clear, current, and strategic understanding of all DCMA engagements, site visits, and their connection to contract deliverables and overarching Anduril strategy</li>\n<li>Cultivating and sustaining a high-trust relationship with DCMA personnel through proactive, transparent communication</li>\n</ul>\n<p>The ideal candidate will have a strong understanding of DCMA relationship management, executive risk communication, and internal program support &amp; compliance readiness reviews.</p>\n<p>Required qualifications include:</p>\n<ul>\n<li>Bachelor’s degree in Engineering, Business Administration, Operations Management, or similar field with 5-7 years of direct DCMA engagement</li>\n<li>Outstanding Program Management skills combined with the proven ability to operate in a nebulous environment</li>\n<li>Strong understanding of FAR/DFARS, AS9100, and defense quality frameworks</li>\n<li>Experience in Aerospace, Defense, Engineering, Manufacturing, or Quality in aircraft product development and complex production process management</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Master’s Degree in Engineering or MBA</li>\n<li>2+ Years of Management Consulting Experience</li>\n<li>5-7 years of experience in process development based on federal regulatory requirements</li>\n</ul>\n<p>The salary range for this role is $146,000-$194,000 USD.</p>","url":"https://yubhub.co/jobs/job_cf464643-638","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5050610007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$146,000-$194,000 USD","x-skills-required":["DCMA relationship management","Executive risk communication","Internal program support & compliance readiness reviews","Program Management","FAR/DFARS","AS9100","Defense quality frameworks","Aerospace","Defense","Engineering","Manufacturing","Quality"],"x-skills-preferred":["Management Consulting","Process development","Federal regulatory requirements"],"datePosted":"2026-04-18T15:51:19.613Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"DCMA relationship management, Executive risk communication, Internal program support & compliance readiness reviews, Program Management, FAR/DFARS, AS9100, Defense quality frameworks, Aerospace, Defense, Engineering, Manufacturing, Quality, Management Consulting, Process development, Federal regulatory requirements","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":146000,"maxValue":194000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_18b6c565-7bb"},"title":"Sr. Software Development Engineer in Test","description":"<p>About Dialpad ---------------- Dialpad is the AI-native business communications platform. 
We unify calling, messaging, meetings, and contact center on a single platform - powered by AI that understands every conversation in real time.</p>\n<p>More than 70,000 companies around the globe, including WeWork, Asana, NASDAQ, AAA Insurance, COMPASS Realty, Uber, Randstad, and Tractor Supply, rely on Dialpad to build stronger customer connections using real-time, AI-driven insights.</p>\n<p>We’re now leading the shift to Agentic AI: intelligent agents that don’t just analyse conversations but take action by automating workflows, resolving customer issues, and accelerating revenue in real time. Our DAART initiative (Dialpad Agentic AI in Real Time) is redefining what a communications platform can do.</p>\n<p>Visit dialpad.com to learn more.</p>\n<p>Being a Dialer --------------- AI isn’t just a feature; it’s how our teams do their best work every day. We put powerful AI tools in every employee’s hands so they can move faster, think bigger, and achieve more.</p>\n<p>We believe every conversation matters. And we’ve built the platform that turns those conversations into insight and action, for our customers and ourselves.</p>\n<p>We look for people who are intensely curious and hold themselves to a high bar. Our ambition is significant, and achieving it requires a team that operates at the highest level.</p>\n<p>We seek individuals who embody our core traits: Scrappy, Curious, Optimistic, Persistent, and Empathetic.</p>\n<p>Your role -------- As a Sr. 
SDET in Agentic QA, you will own the test automation and quality frameworks that support Dialpad’s AI Voice Agent services.</p>\n<p>You will develop automated tests for end-to-end product experiences, from frontend UI to backend services to APIs to audio/text interactions.</p>\n<p>You will test orchestration flows, agent configuration experiences, and guardian safeguards to create robust automated coverage for functionality, performance, reliability, UX, and more.</p>\n<p>In this role, you will develop substantial amounts of automated test infrastructure and partner deeply with the development team to make our fast-growing AI platform more testable, more stable, and more delightful for customers.</p>\n<p>This position is based at one of Dialpad’s Canadian offices and reports to a QA Eng Manager in the United States.</p>\n<p>What you’ll do ------------</p>\n<ul>\n<li>Own end-to-end quality for agentic features and workflows, including strategy, development, execution, and release qualification.</li>\n<li>Design and build automation tooling and frameworks for AI/LLM-driven systems, including prompt flows, agent orchestration, and tool integrations.</li>\n<li>Develop and maintain evaluation frameworks (evals) to measure response quality, accuracy, and hallucination rates.</li>\n<li>Drive automation coverage (80%+ for critical AI workflows) using deterministic + probabilistic validation approaches.</li>\n<li>Integrate AI quality checks into CI/CD pipelines with fast feedback cycles</li>\n<li>Build tooling for LLM observability and debugging, including prompt tracing and response analysis.</li>\n<li>Partner with Applied AI teams on prompt engineering, model selection, and evaluation strategies.</li>\n<li>Design and execute performance and load tests for AI services (latency, throughput, cost efficiency).</li>\n<li>Identify and mitigate risks related to hallucinations, bias, safety, and edge cases.</li>\n<li>Define and track AI quality KPIs (task success rates, 
precision/recall, latency, etc.).</li>\n<li>Participate in design and architecture reviews to ensure systems are testable, observable, and resilient.</li>\n<li>Mentor engineers and contribute to raising the bar on AI quality engineering practices.</li>\n</ul>\n<p>What you’ll bring --------------</p>\n<ul>\n<li>5+ years of experience in software engineering or SDET roles with an emphasis on software development.</li>\n<li>Strong programming skills in Python (preferred), Java, or JavaScript.</li>\n<li>Experience testing distributed, cloud-native SaaS systems and APIs.</li>\n<li>Demonstrated proficiency in coding with AI agents to accelerate development and improve code quality.</li>\n<li>Hands-on exposure to LLMs or AI/ML systems (e.g., OpenAI, Claude, Gemini, or similar platforms).</li>\n<li>Understanding of non-deterministic systems and probabilistic testing approaches.</li>\n<li>Experience building test frameworks and scalable automation systems.</li>\n<li>Familiarity with AI evaluation techniques (benchmarking, golden datasets, human-in-the-loop validation).</li>\n<li>Experience with CI/CD pipelines (e.g., Jenkins, GitHub Actions).</li>\n<li>Strong collaboration skills with the ability to work across distributed teams and time zones.</li>\n<li>Bachelor’s degree in Computer Science or equivalent practical experience.</li>\n</ul>\n<ul>\n<li>Backend: Python, Go, Google Cloud Platform, Cloud Run / App Engine, Kubernetes, Datastore, Redis, ElasticSearch.</li>\n<li>Frontend: Vue3, React.</li>\n<li>AI Stack: LLM APIs, LiveKit, prompt orchestration frameworks, evaluation tooling.</li>\n</ul>\n<p>For exceptional talent based in British Columbia, Canada, the target base salary range for this position is $150,500-$175,250 CAD.</p>\n<p>Why Join Dialpad ---------------</p>\n<ul>\n<li>Work at the center of the AI transformation in business communications.</li>\n<li>Build and ship agentic AI products that are redefining how companies operate.</li>\n<li>Join a team where AI 
amplifies every employee’s impact.</li>\n<li>Competitive salary, comprehensive benefits, and real opportunities for growth.</li>\n</ul>\n<p>We believe in investing in our people. Dialpad offers competitive benefits and perks, cutting-edge AI tools, and a robust training program that helps you reach your full potential.</p>\n<p>We have designed our offices to be inclusive, offering a vibrant environment to cultivate collaboration and connection.</p>\n<p>Our exceptional culture, repeatedly recognized as a Great Place to Work, ensures that every employee feels valued and empowered to contribute to our collective success.</p>\n<p>Don’t meet every single requirement? If you’re excited about this role and possess the fundamental traits, drive, and strong ambition we seek, but your experience doesn’t meet every qualification, we encourage you to apply.</p>\n<p>Dialpad is an equal-opportunity employer. We are dedicated to creating a community of inclusion and an environment free from discrimination or harassment.</p>","url":"https://yubhub.co/jobs/job_18b6c565-7bb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dialpad","sameAs":"https://dialpad.com","logo":"https://logos.yubhub.co/dialpad.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dialpad/jobs/8475155002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$150,500-$175,250 CAD","x-skills-required":["Python","Java","JavaScript","Test automation","Quality frameworks","Agentic AI","Voice Agent services","Orchestration flows","Agent configuration experiences","Guardian safeguards","Functional testing","Performance testing","Reliability testing","UX testing","Cloud-native SaaS systems","APIs","LLMs","AI/ML systems","Non-deterministic systems","Probabilistic testing","Test frameworks","Scalable automation 
systems","CI/CD pipelines","Jenkins","GitHub Actions","Collaboration","Distributed teams","Time zones","Computer Science","Google Cloud Platform","Cloud Run","App Engine","Kubernetes","Datastore","Redis","ElasticSearch","Vue3","React","LLM APIs","LiveKit","Prompt orchestration frameworks","Evaluation tooling"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:44.303Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, JavaScript, Test automation, Quality frameworks, Agentic AI, Voice Agent services, Orchestration flows, Agent configuration experiences, Guardian safeguards, Functional testing, Performance testing, Reliability testing, UX testing, Cloud-native SaaS systems, APIs, LLMs, AI/ML systems, Non-deterministic systems, Probabilistic testing, Test frameworks, Scalable automation systems, CI/CD pipelines, Jenkins, GitHub Actions, Collaboration, Distributed teams, Time zones, Computer Science, Google Cloud Platform, Cloud Run, App Engine, Kubernetes, Datastore, Redis, ElasticSearch, Vue3, React, LLM APIs, LiveKit, Prompt orchestration frameworks, Evaluation tooling","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":150500,"maxValue":175250,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_41a793fc-9ff"},"title":"Staff Data Analyst (Bengaluru)","description":"<p>We are looking for an experienced data analyst to join Okta&#39;s enterprise data team. 
The successful candidate will have a strong background in financial and business performance analytics, and a proven track record of proactively identifying and solving complex business problems through data.</p>\n<p>In this role, you will focus on Finance data and reporting, partnering with Finance, Accounting, Sales Operations, and Executive Leadership to implement enhancements and build end-to-end financial insights across the organization.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Proactively partner with Finance and Accounting leadership to set the analytics roadmap and identify high-impact opportunities for data to drive business value.</li>\n<li>Serve as a trusted advisor to senior Finance and business stakeholders, influencing their strategy and decision-making through data-driven narratives.</li>\n<li>Translate ambiguous business questions into clear, structured analytical requirements and measurable project plans.</li>\n<li>Partner with Finance and Operations teams to provide best practices in financial metric definition, dashboard design, and modeling.</li>\n<li>Conduct deep-dive, root-cause analyses on performance variances, translating complex data into clear, strategic recommendations.</li>\n<li>Design and build scalable data models to support enterprise-wide financial reporting.</li>\n<li>Own the entire lifecycle of financial data products, from initial concept to driving adoption and measuring business impact.</li>\n<li>Enable self-service data consumption for business users.</li>\n<li>Develop and champion new analytical methods and tools to continuously improve financial reporting and decision-making processes.</li>\n<li>Work with Data Engineering to define, implement, and build new data sources and transformations.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>8+ years&#39; experience as a Data Analyst</li>\n<li>6+ years&#39; hands-on SQL experience in a work environment</li>\n<li>Expertise in developing and maintaining complex financial models, including scenario planning, predictive forecasting, and analysis.</li>\n<li>Experience building large-scale data models (e.g., using dbt or Airflow), including proven experience in modeling intricate financial metrics.</li>\n<li>Experience with data management, documenting processes and data flows, and ensuring data quality.</li>\n<li>Familiarity with data quality frameworks and monitoring tools.</li>\n<li>Experience building AI-ready data and semantic layers.</li>\n<li>Experience building reports and visualizations to represent data intuitively in Tableau or similar data visualization tools.</li>\n<li>Exceptional communication, presentation, and storytelling skills, with the ability to convey complex analytical findings to executive audiences.</li>\n<li>Demonstrated ability to operate independently and execute projects with minimal supervision.</li>\n<li>Experience with ETL processes, software development, and lifecycle awareness.</li>\n<li>Familiarity with data governance and report/data catalog applications (Collibra, Alation, Data.world).</li>\n</ul>\n<p>The Okta Experience: Supporting Your Well-Being, Driving Social Impact, Developing Talent and Fostering Connection + Community</p>\n<p>Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. 
We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws.</p>","url":"https://yubhub.co/jobs/job_41a793fc-9ff","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7645984","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","data management","financial modeling","data quality frameworks","data visualization","ETL processes","software development","data governance"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:32.297Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Technology","skills":"SQL, data management, financial modeling, data quality frameworks, data visualization, ETL processes, software development, data governance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1bf9410f-449"},"title":"Certification Development Lead","description":"<p><strong>About the Role</strong></p>\n<p>Anthropic is seeking a Certification Development Lead to build and scale a best-in-class certification program for our partner ecosystem.</p>\n<p>We&#39;ve launched our first certification, the Claude Certified Architect, Foundations (CCA-F), and now we need someone to turn that first credential into a comprehensive, multi-tiered program that our systems integrator, consulting, and technology partners rely on to demonstrate real competence with Claude.</p>\n<p>This is a 0-to-1 build. 
You&#39;ll define the certification strategy, design the credential architecture, and build the program infrastructure from the ground up, all with our partner audience at the center.</p>\n<p>You&#39;ll work at the intersection of program design and subject matter expertise: identifying what partners need to know, structuring the path to get them there, and collaborating with internal SMEs to ensure the content reflects how Claude actually works and where it&#39;s headed.</p>\n<p>The systems and tooling that deliver certifications are being built by a separate team; your focus is the program itself: the credentialing framework, the partner experience, and the cross-functional relationships that keep it all connected.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own the end-to-end certification program: define the vision, build the credentialing framework, and establish the multi-tiered certification path that scales with Anthropic’s partner ecosystem</li>\n<li>Design certification architectures, competency frameworks, and assessment methodologies that measure real applied skill, not just content exposure</li>\n<li>Build with the partner audience in mind: understand the needs, workflows, and incentives of SI, consulting, and technology partners, and design credentialing experiences that meet them where they are</li>\n<li>Collaborate with internal subject matter experts across Applied AI, Product, and Engineering to translate Claude’s evolving capabilities into rigorous, current certification content</li>\n<li>Partner with trainers to ensure alignment between training delivery and credentialing requirements, so the learning journey and credential path reinforce each other</li>\n<li>Work cross-functionally with Partnerships, GTM Productivity, and Revenue Operations to connect certification outcomes to partner performance, pipeline attribution, and ecosystem health metrics</li>\n<li>Define and manage relationships with credentialing vendors and 
collaborate with the systems owner on platform and delivery infrastructure decisions</li>\n<li>Establish quality standards for assessments including validity, reliability, and fairness, and use assessment data to continuously improve the program</li>\n<li>Build the operational processes around certification lifecycle management: development, launch, renewal, versioning, and retirement of credentials as Claude’s capabilities evolve</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>7+ years in certification program design, credentialing, developer education, or partner enablement, with demonstrated experience building programs from scratch, not just maintaining existing ones</li>\n<li>A track record of 0-to-1 builds: you’ve stood up a certification or credentialing program where one didn’t exist before, defined the strategy, earned stakeholder buy-in, and shipped it</li>\n<li>Deep understanding of partner ecosystems, particularly SI and consulting partner audiences, and the ability to design certification experiences that drive real adoption and competency within those organizations</li>\n<li>Strong educational design instincts: you think in terms of competency frameworks, learning progressions, and what it actually means to demonstrate mastery versus having been exposed to content</li>\n<li>Experience with assessment methodologies beyond multiple-choice, including performance-based assessment, portfolio evaluation, or scenario-based testing that measures applied skill</li>\n<li>Baseline fluency with AI and technology products, with the ability and curiosity to learn Claude’s product suite deeply enough to articulate the “why” behind each certification tier and competency requirement</li>\n<li>Clear, structured communication and the ability to work with non-education stakeholders (partnerships, sales, engineering) while maintaining rigorous program standards</li>\n<li>Comfort operating in ambiguous, fast-moving environments, making decisions with 
incomplete information, and iterating quickly based on partner and internal feedback</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience managing relationships with credentialing vendors (Pearson, Credly, or similar) and navigating the operational complexity of third-party certification delivery</li>\n<li>Familiarity with certification program economics: pricing, packaging, and the business model decisions that make a program sustainable</li>\n<li>Background in technical credentialing specifically, such as cloud certifications (AWS, GCP, Azure) or developer platform credentials</li>\n<li>Knowledge of psychometrics, credential validation, or assessment quality frameworks</li>\n<li>Experience benchmarking and learning from best-in-class programs from peer organizations</li>\n<li>Prior work at a high-growth technology company navigating rapid product change</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Annual compensation range: $270,000-$365,000 USD</li>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: 7+ years</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time</li>\n<li>Visa sponsorship: We do sponsor visas!</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1bf9410f-449","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5097348008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$270,000-$365,000 USD","x-skills-required":["certification program design","credentialing","developer education","partner enablement","assessment methodologies","AI and technology products","clear communication","structured communication"],"x-skills-preferred":["credentialing vendors","certification program economics","technical credentialing","psychometrics","credential validation","assessment quality frameworks"],"datePosted":"2026-04-18T15:40:50.492Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City, NY; San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"certification program design, credentialing, developer education, partner enablement, assessment methodologies, AI and technology products, clear communication, structured communication, credentialing vendors, certification program economics, technical credentialing, psychometrics, credential validation, assessment quality frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":270000,"maxValue":365000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_03cafe1e-283"},"title":"Head of Support","description":"<p>We believe that the way people interact with their finances will drastically improve in the next few years. 
We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>\n<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>\n<p>As Head of Support, you will own our global support strategy and outcomes across customer and consumer support, leading a distributed team that works across channels, products, and industries. You&#39;ll be responsible for uniting our customer and consumer support motions, evolving our Customer Success Package business, and using support as a strategic lever to influence product quality, roadmap, and Plaid&#39;s brand in the market.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own the global support strategy and outcomes across SLAs, CSAT, revenue, and support quality.</li>\n<li>Unite our customer and consumer support teams into a single, high-performing organization that is a true differentiator for Plaid.</li>\n<li>Lead, grow, and coach managers and ICs across regions and time zones; drive performance, calibration, and quality programs at scale.</li>\n<li>Manage critical incidents and executive-level escalations in tight partnership with Product, Engineering, Risk, Compliance, GTM, and CS Ops, including post-incident reviews and process fixes.</li>\n<li>Evolve support operations, tooling, and knowledge management to drive efficiency, deflection, and consistent, high-quality experiences for customers and consumers.</li>\n<li>Own the Customer Success Package business, balancing COGS, revenue, and customer experience</li>\n<li>Regularly report on support health and align plans and tradeoffs with Plaid&#39;s executive team and other 
stakeholders</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>10+ years in technical/customer support with 5+ years leading managers (manager-of-managers) in a scaling B2B SaaS or API company.</li>\n<li>3+ years running global support operations with measurable improvements in SLAs, CSAT/CES, and quality.</li>\n<li>Background in fintech, payments, or developer/API platforms operating at significant scale (preferred)</li>\n<li>Proven success owning support outcomes at scale, including incident management and executive-level escalations</li>\n<li>Deep experience building and leading distributed teams, with strong hiring, coaching, and performance management muscles across regions and time zones.</li>\n<li>Strong operational rigor: metrics design, forecasting and capacity planning, process improvement, and support tooling strategy.</li>\n<li>Demonstrated ability to partner with GTM, Product, and Engineering to influence roadmaps and improve product quality through support insights.</li>\n<li>Experience using AI and building content/deflection programs and quality frameworks.</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable. We recognize that strong qualifications can come from both prior work experiences and lived experiences. 
We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_03cafe1e-283","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Plaid","sameAs":"https://plaid.com/","logo":"https://logos.yubhub.co/plaid.com.png"},"x-apply-url":"https://jobs.lever.co/plaid/31d1ef5f-c05a-4c71-8346-2f348d702e98","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD 124,800-223,200 per-year-salary","x-skills-required":["technical/customer support","global support operations","fintech, payments, or developer/API platforms","incident management","distributed teams","operational rigor","AI","content/deflection programs","quality frameworks"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:51:36.800Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Fintech","skills":"technical/customer support, global support operations, fintech, payments, or developer/API platforms, incident management, distributed teams, operational rigor, AI, content/deflection programs, quality frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":124800,"maxValue":223200,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1c431665-20b"},"title":"Data Governance and Management Lead","description":"<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto. 
We are seeking a Data Governance &amp; Management Lead within the Global Analytics team to help develop and implement data controls, data quality standards, and governance practices across the platform.</p>\n<p>This role supports data integrity, metadata, and access controls to help ensure data is accurate, consistent, and fit for purpose. This is a hands-on role that requires strong technical fluency, structured problem-solving, and the ability to translate governance requirements into practical implementations within data systems.</p>\n<p><strong>Technical Skills:</strong></p>\n<ul>\n<li>Working knowledge of data governance, data management, and data quality frameworks</li>\n<li>Experience supporting the implementation of data controls within data pipelines and reporting systems</li>\n<li>Advanced proficiency in SQL, Python, or other data query and analysis tools</li>\n<li>Proficiency with business intelligence and data visualization tools such as Looker, Power BI, or Tableau</li>\n<li>Experience with database design, including understanding complex data schemas and data extraction</li>\n<li>Familiarity with data lineage, metadata management, and data modeling concepts</li>\n<li>Ability to define and implement data quality rules and validation checks</li>\n<li>Understanding of data access principles, including role-based access and data classification</li>\n<li>Ability to document data processes and controls clearly and in a structured way</li>\n</ul>\n<p><strong>Complexity and Impact of Work:</strong></p>\n<ul>\n<li>Oversee the data governance program, identify improvement areas, and implement best practices to enhance data quality, integrity, and security</li>\n<li>Develop and implement data quality standards and monitoring processes, including establishing data quality metrics and thresholds</li>\n<li>Assist in managing the data issue lifecycle, including tracking and supporting remediation efforts</li>\n<li>Manage the data governance platform (Atlan) and 
serve as the primary subject matter expert</li>\n<li>Assist in data classification efforts, including identifying and categorizing sensitive data and critical data elements</li>\n<li>Manage external data requests, including regulatory inquiries, ensuring compliance with banking regulations</li>\n<li>Monitor and report on key data governance metrics and KPIs, providing insights and recommendations to senior management</li>\n<li>Lead data governance meetings and workshops, facilitating discussions and decision-making to drive the data governance program forward</li>\n</ul>\n<p><strong>Organizational Knowledge:</strong></p>\n<ul>\n<li>Have a deep understanding of Anchorage Digital’s strategy and business lines.</li>\n<li>Understand how data supports decision-making and operational processes across the organization</li>\n<li>Possess strategic thinking and vision, with the ability to develop and implement a comprehensive data governance strategy aligned with organizational goals and objectives</li>\n</ul>\n<p><strong>Communication and Influence:</strong></p>\n<ul>\n<li>Able to communicate complex issues clearly and credibly to a wide range of audiences.</li>\n<li>Document data processes, controls, and findings clearly for internal stakeholders</li>\n<li>Build effective relationships and rapport with stakeholders, including cross-functional and external partners</li>\n<li>Communicate, organize, and execute cross-team goals and projects, leveraging relationships and resources to solve problems</li>\n<li>Collaborate with Data Platform, InfoSec, Product, and Engineering partners</li>\n</ul>\n<p><strong>You may be a fit for this role if you have:</strong></p>\n<ul>\n<li>Bachelor’s degree required. 
Advanced degrees or certifications in data analytics or governance preferred</li>\n<li>4–7 years of experience in data governance, data management, data quality, or data analytics</li>\n<li>Hands-on experience implementing or supporting data quality and governance practices</li>\n<li>Experience managing data classification, access controls, and external data requests</li>\n<li>Experience working with data pipelines, reporting systems, or analytical datasets</li>\n<li>Experience writing, editing, or reviewing technical documentation for regulatory or banking contexts</li>\n<li>Strong attention to detail, with a focus on accuracy, completeness, and consistency in data governance processes and controls</li>\n<li>Ability to work independently on defined tasks and contribute to team objectives</li>\n<li>Strong problem-solving skills and comfort working in structured, detail-oriented environments</li>\n</ul>\n<p><strong>Although not a requirement, bonus points if:</strong></p>\n<ul>\n<li>You&#39;ve kept up to date with the proliferation of blockchain and crypto innovations.</li>\n<li>You were emotionally moved by the soundtrack to Hamilton, which chronicles the founding of a new financial system. 
:)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1c431665-20b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anchorage Digital","sameAs":"https://anchorage.com","logo":"https://logos.yubhub.co/anchorage.com.png"},"x-apply-url":"https://jobs.lever.co/anchorage/5bfbd64c-933e-418c-9c07-5aea50212c0d","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data governance","data management","data quality frameworks","SQL","Python","Looker","Power BI","Tableau","database design","data lineage","metadata management","data modeling","data access principles","role-based access","data classification"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:22:29.501Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"data governance, data management, data quality frameworks, SQL, Python, Looker, Power BI, Tableau, database design, data lineage, metadata management, data modeling, data access principles, role-based access, data classification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3d849fbc-058"},"title":"Member of Product, Data Platform","description":"<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto.</p>\n<p>The Data Platform team is the backbone of Anchorage Digital&#39;s information infrastructure. 
As data becomes the lifeblood of every product, compliance workflow, and client-facing report we produce, this team is responsible for building and operating a unified, scalable, and reliable data platform that serves the entire organization.</p>\n<p>As a Data Platform Product Manager, you will own the strategy and execution for centralizing and formalizing the company&#39;s data infrastructure, spanning internal operational data, transaction and blockchain data, customer data, and external data sources.</p>\n<p>Your mission is to transform a fragmented data landscape into a single source of truth that powers mission-critical reporting, business insights, and downstream product experiences across every team at Anchorage.</p>\n<p>This is a force-multiplier role. Your work will elevate the quality, speed, and reliability of every product and team at the company.</p>\n<p>You will define the standards, build the platform, and create the foundation that enables Anchorage to scale with confidence.</p>\n<p>If you thrive at the intersection of complex data systems, cross-functional influence, and platform thinking, this is your opportunity to have outsized impact at a category-defining company in digital assets.</p>\n<p>Below, we define our Factors of Growth &amp; Impact to help Anchorage Villagers measure their impact and articulate feedback, coaching, and the rich learning that happens while exploring, developing, and mastering capabilities within and beyond the Member of Product, Data Platform role:</p>\n<p><strong>Technical Skills:</strong></p>\n<ul>\n<li>Own the detailed prioritization of the data platform roadmap, balancing foundational infrastructure work, new capabilities, and technical debt.</li>\n<li>Demonstrate deep strategic thinking in shaping the platform roadmap, considering the unique data challenges of digital assets, blockchain protocols, and regulated financial services.</li>\n<li>Deliver complex, cross-functional projects with multiple dependencies 
across engineering, analytics, compliance, and operations teams.</li>\n<li>Work closely with engineering and data science counterparts to drive product development processes, sprint planning, and architectural decisions.</li>\n<li>Ability to understand and reason about system architecture, including data warehousing, ETL/ELT pipelines, streaming vs. batch processing, and modern data stack components, and communicate clear requirements to engineering.</li>\n<li>Drive comprehensive go-to-market strategy for internal platform adoption, including defining success metrics, tracking KPIs around data quality and platform usage, and iterating based on data-driven insights.</li>\n</ul>\n<p><strong>Complexity and Impact of Work:</strong></p>\n<ul>\n<li>Lead and influence cross-functional teams while maintaining strong stakeholder relationships across the entire organization, from engineering to finance to compliance.</li>\n<li>Exercise independent decision-making and take full ownership of data platform strategy and execution.</li>\n<li>Contribute strategic insights that significantly impact company direction, operational efficiency, and product quality.</li>\n<li>Demonstrate platform leadership that elevates the performance and effectiveness of every team that depends on data.</li>\n</ul>\n<p><strong>Organizational Knowledge:</strong></p>\n<ul>\n<li>Develop deep understanding of Anchorage&#39;s business model, product suite, regulatory environment, and organizational structure.</li>\n<li>Build and maintain strong relationships with stakeholders across all departments to ensure the data platform serves the company&#39;s most critical needs.</li>\n<li>Navigate and improve organizational data practices to enhance efficiency, compliance, and decision-making.</li>\n<li>Drive company objectives through strategic data platform decisions and initiatives.</li>\n</ul>\n<p><strong>Communication and Influence:</strong></p>\n<ul>\n<li>Effectively influence and motivate teams across 
the organization to adopt platform standards and invest in data quality, even when those teams do not report to you.</li>\n<li>Enable cross-functional collaboration through clear, consistent communication about platform capabilities, timelines, and data governance expectations.</li>\n<li>Act as a thoughtful knowledge partner to senior leadership, translating complex data infrastructure topics into clear business impact.</li>\n<li>Proactively communicate platform goals, status updates, and data health metrics throughout the organization.</li>\n</ul>\n<p><strong>You may be a fit for this role if you:</strong></p>\n<ul>\n<li>5+ years of product management experience, with significant time spent on data platforms, data infrastructure, or data-intensive enterprise products.</li>\n<li>Proven experience building or scaling enterprise data platforms, including data warehousing, data lakes, ETL/ELT pipelines, or modern data stack tooling (e.g., Snowflake, Databricks, dbt, Airflow, Spark).</li>\n<li>Strong understanding of data modeling, data governance, and data quality frameworks.</li>\n<li>Experience working with diverse data types, including transactional data, customer data, financial data, and ideally blockchain or on-chain data.</li>\n<li>Track record of driving cross-functional alignment and adoption for internal platform products where you must influence without direct authority.</li>\n<li>Exceptional written and verbal communication skills, with the ability to convey complex data architecture concepts to both technical and non-technical audiences.</li>\n<li>Your empathy and adaptability not only complement others&#39; working styles but also embody our culture of curiosity, creativity, and shared understanding.</li>\n<li>You self-describe as some combination of the following: creative, humble, ambitious, detail-oriented, hard-working, trustworthy, eager to learn, methodical, action-oriented, and tenacious.</li>\n</ul>\n<p><strong>Although not a requirement, bonus 
points if:</strong></p>\n<ul>\n<li>You have hands-on experience with blockchain data indexing, onchain analytics, or crypto-native data infrastructure.</li>\n<li>You have built data platforms that serve both internal analytics consumers and external client-facing products (reports, statements, dashboards).</li>\n<li>You have experience supporting clients with data-related issues or concerns.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3d849fbc-058","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anchorage Digital","sameAs":"https://anchorage.com","logo":"https://logos.yubhub.co/anchorage.com.png"},"x-apply-url":"https://jobs.lever.co/anchorage/0e730f61-a2e4-4152-8277-3f6383cc69a6","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data platforms","data infrastructure","data-intensive enterprise products","data warehousing","data lakes","ETL/ELT pipelines","modern data stack tooling","Snowflake","Databricks","dbt","Airflow","Spark","data modeling","data governance","data quality frameworks","blockchain or on-chain data"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:18:21.529Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data platforms, data infrastructure, data-intensive enterprise products, data warehousing, data lakes, ETL/ELT pipelines, modern data stack tooling, Snowflake, Databricks, dbt, Airflow, Spark, data modeling, data governance, data quality frameworks, blockchain or on-chain 
data"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b5258a48-495"},"title":"Senior Data and Applied Scientist","description":"<p>Imagine shaping the future of local search for millions of users worldwide. At Bing Places, you&#39;ll join a team that powers business entity relevance on the search results page. You&#39;ll work on cutting-edge tools and metrics that ensure users find the most accurate and meaningful local results. Our team thrives on innovation, leveraging large and small language models, and advanced measurement systems to deliver exceptional quality.</p>\n<p>As a Data Scientist at Bing Places, you will design new relevance metrics, build labeling pipelines, and fine-tune language models to improve search quality. You&#39;ll work on prompt engineering, implement modern language model techniques like Retrieval Augmented Generation, and create scalable workflows for measurement and evaluation.</p>\n<p>This opportunity will allow you to:</p>\n<ul>\n<li>Accelerate your career growth by working on state-of-the-art AI systems.</li>\n<li>Develop deep expertise in prompt engineering and model tuning.</li>\n<li>Hone your skills in building robust data pipelines and quality frameworks.</li>\n</ul>\n<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and implement new relevance metrics to measure and improve local search quality.</li>\n<li>Develop and optimize LLM/SLM labeling pipelines for high-throughput, consistent quality judgments.</li>\n<li>Engineer and fine-tune prompts for LLMs to enhance query understanding and classification accuracy.</li>\n<li>Apply modern LLM techniques such as retrieval-augmented generation for improved grounding and relevance.</li>\n<li>Build scalable workflows and dashboards for measurement, evaluation cycles, and quality checks.</li>\n<li>Analyze failure modes and improve prompt rubrics to reduce defect rates and enhance labeling consistency.</li>\n<li>Collaborate with cross-functional teams to integrate metrics and labeling systems into production environments.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND relevant data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR Master’s Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND relevant data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR Bachelor’s Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND relevant data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR equivalent experience.</li>\n<li>Relevant years of customer-facing, project-delivery, professional services, and/or consulting 
experience.</li>\n</ul>\n<p>Other Requirements:</p>\n<ul>\n<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b5258a48-495","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-data-and-applied-scientist/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Science","Mathematics","Statistics","Econometrics","Economics","Operations Research","Computer Science","LLM/SLM labeling pipelines","Prompt engineering","Model tuning","Data pipelines","Quality frameworks"],"x-skills-preferred":[],"datePosted":"2026-03-08T22:19:32.251Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Barcelona"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, LLM/SLM labeling pipelines, Prompt engineering, Model tuning, Data pipelines, Quality frameworks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_714d4a02-1c4"},"title":"Applied Scientist","description":"<p>Imagine shaping the future of local search for millions of users worldwide. 
At Bing Places, you&#39;ll join a team that powers business entity relevance on the search results page. You&#39;ll work on cutting-edge tools and metrics that ensure users find the most accurate and meaningful local results. Our team thrives on innovation, leveraging large and small language models, and advanced measurement systems to deliver exceptional quality.</p>\n<p>As an Applied Scientist at Bing Places, you will design new relevance metrics, build labeling pipelines, and fine-tune language models to improve search quality. You&#39;ll work on prompt engineering, implement modern language model techniques like Retrieval Augmented Generation, and create scalable workflows for measurement and evaluation.</p>\n<p>This opportunity will allow you to:</p>\n<ul>\n<li>Accelerate your career growth by working on state-of-the-art AI systems.</li>\n<li>Develop deep expertise in prompt engineering and model tuning.</li>\n<li>Hone your skills in building robust data pipelines and quality frameworks.</li>\n</ul>\n<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and implement new relevance metrics to measure and improve local search quality.</li>\n<li>Develop and optimize LLM/SLM labeling pipelines for high-throughput, consistent quality judgments.</li>\n<li>Engineer and fine-tune prompts for LLMs to enhance query understanding and classification accuracy.</li>\n<li>Apply modern LLM techniques such as retrieval-augmented generation for improved grounding and relevance.</li>\n<li>Build scalable workflows and dashboards for measurement, evaluation cycles, and quality checks.</li>\n<li>Analyze failure modes and improve prompt rubrics to reduce defect rates and enhance labeling consistency.</li>\n<li>Collaborate with cross-functional teams to integrate metrics and labeling systems into production environments.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND hands-on experience (e.g., statistics, predictive analytics, research) OR Master’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND hands-on experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field OR equivalent experience.</li>\n</ul>\n<p>Other Requirements:</p>\n<ul>\n<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. 
These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_714d4a02-1c4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/applied-scientist-7/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Statistics","Econometrics","Computer Science","Electrical or Computer Engineering","LLM/SLM labeling pipelines","Prompt engineering","Model tuning","Data pipelines","Quality frameworks"],"x-skills-preferred":["Retrieval Augmented Generation","Scalable workflows","Dashboards","Measurement","Evaluation cycles","Quality checks"],"datePosted":"2026-03-08T22:15:35.607Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Barcelona"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, LLM/SLM labeling pipelines, Prompt engineering, Model tuning, Data pipelines, Quality frameworks, Retrieval Augmented Generation, Scalable workflows, Dashboards, Measurement, Evaluation cycles, Quality checks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_87dc48c7-e55"},"title":"Certification Content & Systems Architect","description":"<p>About Anthropic</p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. 
We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p>About the role:</p>\n<p>Anthropic is seeking a Certification Content and Systems Architect to design and build the educational content and assessment components that power Anthropic&#39;s credentialing efforts. You&#39;ll own the substance of what certifications teach and measure: curriculum architecture, assessment design, competency frameworks, and the AI-native systems that let one educator maintain and evolve certification content across multiple programs simultaneously.</p>\n<p>You&#39;ll work closely with go-to-market and partnerships teams who define what certifications are needed and for whom. You own the educational design: what people need to know, how we assess whether they know it, and how we build those learning and assessment experiences to scale through intelligent tooling rather than growing headcount. The broader program strategy — audience planning, pricing, partner relationships — lives with other teams; you&#39;re the person who makes the education rigorous, current, and scalable.</p>\n<p>We&#39;re looking for someone who thinks about assessment from first principles — someone skeptical of multiple-choice exams as the default and excited about what becomes possible when AI can evaluate demonstrations of competence, generate adaptive assessments, and maintain item banks that evolve with the product. 
You should be as comfortable building the tooling and workflows around certification as you are designing the educational content itself.</p>\n<p>Responsibilities:</p>\n<ul>\n<li><p>Design certification curriculum architectures, assessment methodologies, and competency frameworks for product and technical knowledge</p>\n</li>\n<li><p>Build AI-augmented assessment systems — item generation, adaptive testing, competency evaluation — that maintain rigor while scaling without proportional human effort</p>\n</li>\n<li><p>Define competency levels, learning progressions, and assessment rubrics for different certification tiers as defined by program stakeholders</p>\n</li>\n<li><p>Create and maintain the educational content that feeds into certification programs, working with the Train-the-Trainer Lead to ensure alignment between training delivery and credentialing requirements</p>\n</li>\n<li><p>Develop and maintain item banks, assessment instruments, and evaluation rubrics — and build AI-assisted processes for keeping these current as Claude&#39;s capabilities evolve</p>\n</li>\n<li><p>Collaborate with GTM, Partnerships, and Customer Success when they identify certification needs — receiving their requirements around audience and business context, then owning the educational design and delivery</p>\n</li>\n<li><p>Establish quality standards for certification assessments including validity, reliability, and fairness</p>\n</li>\n<li><p>Analyze assessment data to continuously improve the learning experience and identify gaps in educational content</p>\n</li>\n<li><p>Build operational tooling for certification administration, verification, and renewal on the educational side</p>\n</li>\n</ul>\n<p>You may be a good fit if you have:</p>\n<ul>\n<li><p>5+ years in education, assessment design, credentialing, or developer education — with demonstrated experience designing how competence is measured, not just how content is delivered</p>\n</li>\n<li><p>A working practice of using 
AI tools as core infrastructure in your workflows — you naturally build AI into your processes for content generation, assessment creation, quality checking, and operational tasks</p>\n</li>\n<li><p>Technical fluency with Claude&#39;s products including the API, Claude Code, and Claude.ai — sufficient to design meaningful assessments of real competence with these tools</p>\n</li>\n<li><p>Experience with assessment methodologies beyond multiple-choice: performance-based assessment, portfolio evaluation, adaptive testing, or other approaches that measure applied skill</p>\n</li>\n<li><p>Strong educational design instincts — you think in terms of learning progressions, competency frameworks, and what it actually means to &#39;know&#39; something versus having been exposed to it</p>\n</li>\n<li><p>Comfort operating in ambiguous, fast-moving environments and building from scratch — particularly building systems designed to run lean through intelligent automation</p>\n</li>\n<li><p>Clear, structured communication and the ability to work with non-education stakeholders (sales, partnerships) while maintaining an educator&#39;s standards</p>\n</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li><p>Experience building AI-augmented assessment systems or using LLMs for item generation, evaluation, or adaptive learning</p>\n</li>\n<li><p>Knowledge of psychometrics, credential validation, or assessment quality frameworks</p>\n</li>\n<li><p>Experience with non-traditional methods for tracking and assessing understanding</p>\n</li>\n<li><p>Background in developer education or technical credentialing specifically</p>\n</li>\n<li><p>Familiarity with LMS platforms or certification delivery systems</p>\n</li>\n<li><p>Prior work at a high-growth technology company navigating rapid product change</p>\n</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role&#39;s On Target Earnings (&#39;OTE&#39;) 
range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary:</p>\n<p>$270,000 - $365,000 USD</p>\n<p>Logistics</p>\n<p>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience. Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to support you through the visa process.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_87dc48c7-e55","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5097348008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$270,000 - $365,000 USD","x-skills-required":["AI","assessment design","credentialing","developer education","educational content","educational design","item banks","item generation","learning progressions","psychometrics","quality frameworks","technical knowledge"],"x-skills-preferred":["adaptive testing","AI-augmented assessment systems","assessment methodologies","competency frameworks","competency levels","credential validation","evaluation rubrics","item banks","learning management systems","LMS platforms","LLMs","portfolio evaluation","psychometrics","quality frameworks","technical 
credentialing"],"datePosted":"2026-03-08T13:57:36.153Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI, assessment design, credentialing, developer education, educational content, educational design, item banks, item generation, learning progressions, psychometrics, quality frameworks, technical knowledge, adaptive testing, AI-augmented assessment systems, assessment methodologies, competency frameworks, competency levels, credential validation, evaluation rubrics, item banks, learning management systems, LMS platforms, LLMs, portfolio evaluation, psychometrics, quality frameworks, technical credentialing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":270000,"maxValue":365000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1106927f-e3c"},"title":"Data Engineer III","description":"<p>As a Data Engineer III at EADP Gameplay Services, you will plan, build, and deploy enterprise integration and business intelligence solutions to support and enhance matchmaking services. 
You will serve as a subject matter expert in developing data integration solutions with modern ETL technologies, leveraging advanced tools to store and analyse large-scale data and address complex business challenges.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Analyse and organise raw matchmaking data; model datasets (3NF, dimensional) for analytics and products.</li>\n<li>Build and operate reliable data pipelines (batch and streaming) across multiple sources at petabyte scale.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>6-8 years designing, developing, and managing data systems (warehouses, lakes, and distributed stores) at multi-PB scale.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1106927f-e3c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-III/212231","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data integration","ETL technologies","data analysis"],"x-skills-preferred":["real-time pipelines","data quality frameworks","basic statistical modelling"],"datePosted":"2026-02-17T18:04:01.192Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data integration, ETL technologies, data analysis, real-time pipelines, data quality frameworks, basic statistical modelling"}]}