{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/flux"},"x-facet":{"type":"skill","slug":"flux","display":"Flux","count":18},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_16599c27-a87"},"title":"Senior Infrastructure Engineer/SRE","description":"<p>We&#39;re on a mission to revolutionize the workforce with AI. As a member of the infrastructure team, you&#39;ll design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>You&#39;ll partner with engineers to build dev tools that empower developer workflows and deployment infrastructure. Ensure reliability of multi-cloud Kubernetes clusters and pipelines. Implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications. Automate operations and engineering, focusing on automation so we can spend energy where it matters.</p>\n<p>You&#39;ll also build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>\n<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field. You should have deep proficiency with coding languages such as Golang or Python, and deep familiarity with container-related security best practices. 
You should also have production experience with Kubernetes and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns. Experience with GPU-enabled clusters is a bonus.</p>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family</li>\n<li>Flexible PTO to take the time you need, when you need it</li>\n<li>Paid parental leave for all new parents welcoming a new child</li>\n<li>Retirement savings plan to help you plan for the future</li>\n<li>Remote work setup budget to help you create a productive home office</li>\n<li>Monthly wellness and communication stipend to keep you connected and balanced</li>\n<li>In-office meal program and commuter benefits provided for onsite employees</li>\n</ul>\n<p>Compensation at Cresta:</p>\n<p>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table. The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. 
In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</p>\n<p>OTE Range: $205,000–$270,000 + Offers Equity</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_16599c27-a87","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5137153008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,000–$270,000","x-skills-required":["Golang","Python","Kubernetes","cert-manager","external-dns","GPU-enabled clusters","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","PostgreSQL","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:52.459Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c1903386-87b"},"title":"Staff Infrastructure Software Engineer (Kubernetes)","description":"<p>As a member of the infrastructure team, you will design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>You will 
partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</p>\n<p>You will ensure the reliability of multi-cloud Kubernetes clusters and pipelines.</p>\n<p>You will implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</p>\n<p>You will build infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</p>\n<p>You will automate operations and engineering, focusing on automation so we can spend energy where it matters.</p>\n<p>You will build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>\n<p>We are looking for a highly skilled engineer with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field.</p>\n<ul>\n<li>Deep proficiency with coding languages such as Golang or Python.</li>\n<li>Deep familiarity with container-related security best practices.</li>\n<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>\n<li>Experience with GPU-enabled clusters is a bonus.</li>\n<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>\n<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>\n<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>\n<li>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>\n<li>Production experience with database software such as PostgreSQL.</li>\n<li>Experience with GitOps tooling such as Flux or Argo.</li>\n<li>Experience with CI/CD such as GitHub Actions.</li>\n</ul>\n<p>Perks and benefits include paid parental leave, monthly health and wellness allowance, and PTO.</p>\n<p>Compensation includes a base salary, equity, and a variety of benefits.</p>","url":"https://yubhub.co/jobs/job_c1903386-87b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4535898008","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Python","Kubernetes","cert-manager","external-dns","GPU-enabled clusters","Helm","Kustomize","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","Google Cloud","Azure","PostgreSQL","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:57.717Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Germany (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_26212e9e-5a8"},"title":"Infrastructure Engineer/SRE","description":"<p>We&#39;re seeking an experienced Infrastructure Engineer/SRE to join our engineering team. 
As a key member of our infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>We offer a collaborative but highly autonomous working environment: each member has a defined role with clear expectations, as well as the freedom to pursue projects they find interesting.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</li>\n<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>\n<li>Implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>\n<li>Build infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>\n<li>Automate operations and engineering. Focus on automation so we can spend energy where it matters.</li>\n<li>Build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>\n</ul>\n<p>What we are looking for:</p>\n<ul>\n<li>5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field.</li>\n<li>Deep proficiency with coding languages such as Golang or Python.</li>\n<li>Deep familiarity with container-related security best practices.</li>\n<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>\n<li>Experience with GPU-enabled clusters is a bonus.</li>\n<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>\n<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>\n<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>\n<li>Production experience with other cloud providers such as Google Cloud and 
Azure is a bonus.</li>\n<li>Production experience with database software such as PostgreSQL.</li>\n<li>Experience with GitOps tooling such as Flux or Argo.</li>\n<li>Experience with CI/CD such as GitHub Actions.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>We offer Cresta employees a variety of medical benefits designed to fit your stage of life.</li>\n<li>Flexible vacation time to promote a healthy work-life blend.</li>\n<li>Paid parental leave to support you and your family.</li>\n</ul>\n<p>Compensation for this position includes a base salary, equity, and a variety of benefits. Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable.</p>","url":"https://yubhub.co/jobs/job_26212e9e-5a8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5113847008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Python","Kubernetes","cert-manager","external-dns","GPU-enabled clusters","Helm","Kustomize","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","Google Cloud","Azure","PostgreSQL","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:55.875Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Australia (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, 
EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3ac95264-313"},"title":"Staff Infrastructure Software Engineer (Kubernetes)","description":"<p>We&#39;re looking for a Staff Infrastructure Software Engineer (Kubernetes) to join our engineering team. As a member of the infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>You will partner with engineers to build dev tools that empower developer workflows and deployment infrastructure. You will ensure the reliability of multi-cloud Kubernetes clusters and pipelines. You will also implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</p>\n<p>You will focus on automation so we can spend energy where it matters. You will build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>\n<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field. You should have deep proficiency with coding languages such as Golang or Python. You should also have deep familiarity with container-related security best practices.</p>\n<p>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns, is required. 
Experience with GPU-enabled clusters is a bonus.</p>\n<p>Production experience with Kubernetes templating tools such as Helm or Kustomize, and production experience working with IaC tools such as Terraform or CloudFormation, are a plus.</p>\n<p>Production experience working with AWS and services such as IAM, S3, EC2, and EKS, and production experience with other cloud providers such as Google Cloud and Azure, are a bonus.</p>\n<p>Experience with GitOps tooling such as Flux or Argo, and experience with CI/CD such as GitHub Actions, are a plus.</p>\n<p>Compensation for this position includes a base salary, equity, and a variety of benefits.</p>","url":"https://yubhub.co/jobs/job_3ac95264-313","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4802840008","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Python","Kubernetes","container-related security best practices","cert-manager","external-dns","Helm","Kustomize","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:47.350Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Romania (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, container-related security best practices, cert-manager, external-dns, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, GitOps, Flux, Argo, CI/CD, GitHub 
Actions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a438f945-411"},"title":"Senior Site Reliability Engineer (Resilience) - Platform Resilience","description":"<p>We&#39;re seeking a Senior Site Reliability Engineer (SRE) to join our Platform Engineering department. As an SRE, you will lead technical initiatives to automate system engineering efforts, ensuring the reliability of our global infrastructure. You will grow our global Platform infrastructure to meet increasing scaling demands by developing and maintaining software, tooling, and automations.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Develop and maintain software, tooling, and automations to ensure the reliability and scalability of our global infrastructure.</li>\n</ul>\n<ul>\n<li>Lead technical initiatives to automate system engineering efforts, ensuring the reliability of our global infrastructure.</li>\n</ul>\n<ul>\n<li>Collaborate with engineers to identify, implement, and deliver solutions that meet the needs of our customers.</li>\n</ul>\n<ul>\n<li>Champion an environment focused on collaboration, operational excellence, and uplifting others.</li>\n</ul>\n<ul>\n<li>Respond to and prevent repeated customer impact in response to major incidents and prioritized problem management.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Success and lessons of experiences from striving for &#39;progress not perfection&#39; in the name of Platform reliability.</li>\n</ul>\n<ul>\n<li>Background in software engineering to collaborate with engineers to expertly identify, implement, and deliver solutions.</li>\n</ul>\n<ul>\n<li>Experience in public cloud and managed Kubernetes services is advantageous.</li>\n</ul>\n<ul>\n<li>Passion for developing solutions that involve inclusive communication methods to grow and strengthen partner and team relationships.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Operated a SaaS product in a 
public cloud, ideally built using Infrastructure-as-Code tooling such as Crossplane or Terraform.</li>\n<li>Built or operated a Kubernetes-at-scale infrastructure, ideally across multiple cloud providers, and the vital automation to support it.</li>\n<li>Written non-trivial programs in Golang or other programming languages.</li>\n<li>Worked with containerized services (such as Docker).</li>\n<li>Proven experience leading and improving alerting and major incident management processes, using metrics systems (e.g. Elastic Stack, Graphite, Prometheus, Influx) to diagnose issues, quantify impacts, and present findings at varying levels of the organization.</li>\n<li>Experienced in system administration with professional skills in Linux on distributed systems at scale.</li>\n<li>Diagnosed or designed, implemented, and created solutions with the Elastic Stack.</li>\n<li>Thrived in a self-organizing, sharing, globally distributed team environment.</li>\n<li>Strengthened team members, bringing out the best in each other through coaching and mentoring.</li>\n</ul>\n<p>Compensation:</p>\n<ul>\n<li>This role is eligible to participate in Elastic&#39;s stock program.</li>\n<li>Total rewards package includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</li>\n<li>Typical starting salary range for this role is $154,800-$195,600 USD.</li>\n</ul>","url":"https://yubhub.co/jobs/job_a438f945-411","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7794016","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$154,800-$195,600 USD","x-skills-required":["Software engineering","Public cloud","Managed Kubernetes services","Infrastructure-as-Code tooling","Containerized services","System administration","Linux on distributed systems"],"x-skills-preferred":["Golang","Crossplane","Terraform","Docker","Elastic Stack","Graphite","Prometheus","Influx"],"datePosted":"2026-04-18T15:53:14.287Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software engineering, Public cloud, Managed Kubernetes services, Infrastructure-as-Code tooling, Containerized services, System administration, Linux on distributed systems, Golang, Crossplane, Terraform, Docker, Elastic Stack, Graphite, Prometheus, Influx","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":154800,"maxValue":195600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4de3b9d6-319"},"title":"Senior Manager, Technical Revenue Accounting","description":"<p>We are seeking a Senior Manager, Technical Revenue Accounting to join our Accounting Team. 
In this role, you will execute complex ASC 606 technical evaluations, draft comprehensive revenue accounting memos, and provide strategic accounting guidance on new product revenue recognition treatments and go-to-market initiatives.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Author detailed ASC 606 technical accounting memos analysing complex revenue recognition scenarios and providing accounting conclusions</li>\n<li>Evaluate and establish revenue recognition treatment for new product offerings, including performance obligation identification, SSP evaluation, and contract modification assessments</li>\n<li>Advise business stakeholders on revenue accounting implications of new go-to-market strategies, pricing models, and unique commercial arrangements</li>\n<li>Review revenue contracts and prepare technical accounting documentation, including ASC 606 checklists and position papers to ensure revenue recognition compliance</li>\n<li>Lead technical research initiatives on emerging revenue recognition issues and present authoritative findings to senior management</li>\n<li>Develop and maintain comprehensive revenue recognition policies, procedures, and technical guidance documentation</li>\n<li>Translate complex technical accounting requirements into actionable business guidance for non-accounting stakeholders</li>\n<li>Prepare for revenue-related month end close activities and flux analysis</li>\n<li>Build and maintain relationships with cross-functional stakeholders to drive effective collaboration</li>\n<li>Participate in and contribute to process improvement and system implementation projects</li>\n</ul>\n<p>You may be a good fit if you have 7+ years of progressive accounting experience, with extensive expertise in ASC 606 implementation and complex revenue recognition scenarios.</p>\n<p>Strong candidates may have a Bachelor&#39;s degree in Accounting or Finance, CPA certification, and experience with technical accounting advisory work, including revenue 
recognition consulting or implementation experience.</p>","url":"https://yubhub.co/jobs/job_4de3b9d6-319","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5077106008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,000-$230,000 USD","x-skills-required":["ASC 606","Technical Accounting","Revenue Recognition","Accounting Memos","Contract Modification Assessments","Performance Obligation Identification","SSP Evaluation","Revenue Contracts","Technical Accounting Documentation","ASC 606 Checklists","Position Papers","Revenue Recognition Compliance","Month End Close Activities","Flux Analysis","Cross-Functional Stakeholders","Process Improvement","System Implementation Projects"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:54.715Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Technology","skills":"ASC 606, Technical Accounting, Revenue Recognition, Accounting Memos, Contract Modification Assessments, Performance Obligation Identification, SSP Evaluation, Revenue Contracts, Technical Accounting Documentation, ASC 606 Checklists, Position Papers, Revenue Recognition Compliance, Month End Close Activities, Flux Analysis, Cross-Functional Stakeholders, Process Improvement, System Implementation 
Projects","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":230000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3db3eb4d-3dc"},"title":"Senior Accounting Manager","description":"<p>Join Airtable as a Senior Accounting Manager and play a pivotal role in transforming our accounting operations through automation and AI-driven solutions.</p>\n<p>You will own key components of our monthly, quarterly, and year-end financial reporting processes, drive continuous improvement, and help us achieve a faster, more efficient close.</p>\n<p>This is a unique opportunity to make a significant impact at a high-growth technology company, shaping the future of our finance function while developing your career in a dynamic, collaborative environment.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Owning and managing key aspects of the monthly, quarterly, and year-end close process, including review of journal entries and balance sheet reconciliations</li>\n</ul>\n<ul>\n<li>Leveraging AI and automation to improve the efficiency of the month-end close process, driving a sustainable 5-day close by automating accruals, flux analysis, and reconciliations</li>\n</ul>\n<ul>\n<li>Driving continuous improvement across accounting processes, supporting SOX compliance and other cross-functional finance and systems initiatives</li>\n</ul>\n<ul>\n<li>Supporting the annual financial audit, ensuring readiness through strong documentation and timely responses to audit requests</li>\n</ul>\n<ul>\n<li>Measuring and reporting on efficiency gains from AI-enabled workflows, including hours saved in tasks such as flux analysis, reconciliations, and journal entry preparation</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>7+ years of progressive accounting experience, ideally with a mix of public accounting and industry experience 
in a high-growth or technology-enabled company</li>\n<li>Experienced in managing the month-end close and financial reporting processes, with exposure to audits, internal controls, and cross-functional collaboration</li>\n<li>Strong knowledge of US GAAP, financial reporting, and month-end close processes, including journal entries, reconciliations, and flux analysis</li>\n<li>Background in process automation, systems implementations, or leveraging AI/data tools to improve accounting workflows</li>\n<li>Experience supporting external audits and internal control environments, including SOX readiness and documentation</li>\n<li>Ability to lead and develop team members while managing multiple priorities in a fast-paced environment</li>\n<li>Strong systems and process improvement mindset, with a track record of implementing automation or improving accounting workflows</li>\n</ul>","url":"https://yubhub.co/jobs/job_3db3eb4d-3dc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airtable","sameAs":"https://airtable.com/","logo":"https://logos.yubhub.co/airtable.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airtable/jobs/8460788002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["US GAAP","Financial reporting","Month-end close processes","Journal entries","Reconciliations","Flux analysis","Process automation","Systems implementations","AI/data tools","SOX compliance"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:40:15.685Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Technology","skills":"US GAAP, Financial 
reporting, Month-end close processes, Journal entries, Reconciliations, Flux analysis, Process automation, Systems implementations, AI/data tools, SOX compliance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_04884ef5-f9e"},"title":"Software Engineer, Compute (8+ YOE)","description":"<p>We&#39;re looking for an experienced software engineer to help lead the next phase of platform maturity in how we run Kubernetes at Airtable. As a member of the Compute Platform team, you&#39;ll be responsible for building and evolving the infrastructure that powers Airtable&#39;s services at scale.</p>\n<p>Your primary focus will be on designing, implementing, and scaling core Kubernetes platform capabilities used across ~70 clusters, spread across multiple environments. You&#39;ll also lead foundational modernization efforts, such as migrating to a new CNI plugin to overhaul IP security rule management across clusters and regions.</p>\n<p>In addition to your technical expertise, you&#39;ll collaborate closely with product and security teams to power a rapidly growing enterprise business. You&#39;ll spend roughly 70% of your time in hands-on engineering and 30% in design reviews, mentorship, and cross-team collaboration.</p>\n<p>To succeed in this role, you&#39;ll need 8+ years of software engineering experience, with deep expertise building and operating a Kubernetes-based internal service platform. 
You&#39;ll also need a strong understanding of Kubernetes internals, including controllers/operators, CRDs, networking, and cluster architecture.</p>\n<p>If you&#39;re excited about building internal platforms, shaping infrastructure strategy, and partnering closely with product and security teams, we&#39;d love to hear from you.</p>","url":"https://yubhub.co/jobs/job_04884ef5-f9e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airtable","sameAs":"https://airtable.com/","logo":"https://logos.yubhub.co/airtable.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airtable/jobs/8442397002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Kubernetes","Typescript","Golang","Cloud Native Infrastructure","CI/CD","Infrastructure as Code","Terraform","CloudFormation","OpenTofu","Pulumi"],"x-skills-preferred":["AWS infrastructure","EKS","Spinnaker","ArgoCD","Flux","Jenkins"],"datePosted":"2026-04-18T15:39:46.997Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY; Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Typescript, Golang, Cloud Native Infrastructure, CI/CD, Infrastructure as Code, Terraform, CloudFormation, OpenTofu, Pulumi, AWS infrastructure, EKS, Spinnaker, ArgoCD, Flux, Jenkins"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a632e52b-c63"},"title":"Site Reliability Engineer","description":"<p>About Mistral AI</p>\n<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. 
Our technology is designed to integrate seamlessly into daily working life.</p>\n<p>We are a dynamic team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>\n<p>Role Summary</p>\n<p>We are seeking highly experienced Site Reliability Engineers (SRE) to shape the reliability, scalability and performance of our platform and customer facing applications. You will work closely with our software engineers and research teams to ensure our systems meet and exceed our internal and external customers&#39; expectations.</p>\n<p>Responsibilities</p>\n<p>As a Site Reliability Engineer, you balance the day-to-day operations on production systems with long-term software engineering improvements to reduce operational toil and foster the reliability, availability, and performance of these systems.</p>\n<p>Operations</p>\n<p>• Design, build, and maintain scalable, highly available and fault-tolerant infrastructures to support our web services and ML workloads</p>\n<p>• Make sure our platform, inference and model training environments are always highly available and enable seamless replication of work environments across several HPC clusters</p>\n<p>• Operate systems and troubleshoot issues in production environments (interrupts, on-call responses, users admin, data extraction, infrastructure scaling, etc.)</p>\n<p>• Implement and improve monitoring, alerting, and incident response systems to ensure optimal system performance and minimize downtime</p>\n<p>• Implement and maintain workflows and tools (CI/CD, containerization, orchestration, monitoring, logging and alerting systems) for both our client-facing APIs and large training runs</p>\n<p>• Participate occasionally in on-call rotations to respond to incidents and perform root cause analysis to prevent future occurrences</p>\n<p>Development</p>\n<p>• Drive continuous improvement in infrastructure automation, deployment, 
and orchestration using tools like Kubernetes, Flux, Terraform</p>\n<p>• Collaborate with AI/ML researchers to develop and implement solutions that enable safe and reproducible model-training experiments</p>\n<p>• Build a cloud-agnostic platform offering an abstraction layer between science and infrastructure</p>\n<p>• Design and develop new workflows and tooling to improve the reliability, availability and performance of our systems (automation scripts, refactoring, new API-based features, web apps, dashboards, etc.)</p>\n<p>• Collaborate with the security team to ensure infrastructure adheres to best security practices and compliance requirements</p>\n<p>• Document processes and procedures to ensure consistency and knowledge sharing across the team</p>\n<p>• Contribute to open-source projects, research publications, blog articles and conferences</p>\n<p>About You</p>\n<p>• Master’s degree in Computer Science, Engineering or a related field</p>\n<p>• 7+ years of experience in a DevOps/SRE role</p>\n<p>• Strong experience with cloud computing and highly available distributed systems</p>\n<p>• Exposure to site reliability issues in critical environments (issue root cause analysis, in-production troubleshooting, on-call rotations...)</p>\n<p>• Experience working against reliability KPIs (observability, alerting, SLAs)</p>\n<p>• Hands-on experience with CI/CD, containerization and orchestration tools (Docker, Kubernetes...)</p>\n<p>• Knowledge of monitoring, logging, alerting and observability tools (Prometheus, Grafana, ELK Stack, Datadog...)</p>\n<p>• Familiarity with infrastructure-as-code tools like Terraform or CloudFormation</p>\n<p>• Proficiency in scripting languages (Python, Go, Bash...) 
and knowledge of software development best practices</p>\n<p>• Strong understanding of networking, security, and system administration concepts</p>\n<p>• Excellent problem-solving and communication skills</p>\n<p>• Self-motivated and able to work well in a fast-paced startup environment</p>\n<p>Your Application Will Be All The More Interesting If You Also Have:</p>\n<p>• Experience in an AI/ML environment</p>\n<p>• Experience of high-performance computing (HPC) systems and workload managers (Slurm)</p>\n<p>• Worked with modern AI-oriented solutions (Fluidstack, Coreweave, Vast...)</p>","url":"https://yubhub.co/jobs/job_a632e52b-c63","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/6e16e4fa-a60b-4270-a815-06b0450fb597","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["cloud computing","highly available distributed systems","DevOps","SRE","Kubernetes","Flux","Terraform","CI/CD","containerization","orchestration","monitoring","logging","alerting","observability","infrastructure-as-code","scripting languages","software development best practices","networking","security","system administration"],"x-skills-preferred":["AI/ML environment","high-performance computing (HPC) systems","workload managers","modern AI-oriented solutions"],"datePosted":"2026-04-17T12:47:37.519Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud computing, highly available distributed systems, DevOps, SRE, Kubernetes, Flux, Terraform, CI/CD, 
containerization, orchestration, monitoring, logging, alerting, observability, infrastructure-as-code, scripting languages, software development best practices, networking, security, system administration, AI/ML environment, high-performance computing (HPC) systems, workload managers, modern AI-oriented solutions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_45d91300-060"},"title":"Developer Relations Engineer","description":"<p>We&#39;re looking for a Developer Relations Engineer to join our team in San Francisco. As a key member of our DevRel, Growth and Marketing team, you will be responsible for creating content, documentation, and community engagement to help developers discover, learn, and succeed with FLUX.</p>\n<p>Your primary focus will be on improving our video presence, creating world-class developer documentation, writing tutorials, and engaging directly with the developer community. You will work closely with our research team to shape how developers discover and learn about FLUX.</p>\n<p>In this role, you will have the opportunity to build our local presence in the SF developer community and grow our reach online. You will be responsible for running hackathons, creating video tutorials, and maintaining open-source demo repositories.</p>\n<p>We are looking for someone with 3+ years of experience in developer relations or a technical role with a public-facing component. 
You should have a proven track record of technical content creation, up-to-date engineering skills, and working knowledge of AI/ML development.</p>\n<p>Nice to have: experience with Generative Media, finetuning/LORA of FLUX/Diffusion models.</p>","url":"https://yubhub.co/jobs/job_45d91300-060","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Black Forest Labs","sameAs":"https://www.blackforestlabs.com/","logo":"https://logos.yubhub.co/blackforestlabs.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/blackforestlabs/jobs/5125852008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$175K-$290K + equity","x-skills-required":["Developer Relations","Technical Content Creation","AI/ML Development","Engineering Skills","Community Engagement"],"x-skills-preferred":["Generative Media","Finetuning/LORA of FLUX/Diffusion models"],"datePosted":"2026-04-17T12:24:38.632Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco (USA)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Developer Relations, Technical Content Creation, AI/ML Development, Engineering Skills, Community Engagement, Generative Media, Finetuning/LORA of FLUX/Diffusion models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_353dc38a-b8c"},"title":"Software Engineering, Staff Engineer (Network SRE)","description":"<p>Join us to transform the future through continuous technological innovation.</p>\n<p>You are a passionate and experienced network professional with a proven track record 
of driving reliability, automation, and operational excellence in large-scale environments. You thrive in dynamic, collaborative settings and bring a data-driven mindset to every challenge.</p>\n<p>Your expertise spans both campus and data center networks, and you&#39;re adept at integrating cutting-edge SRE principles with robust network engineering practices. You are comfortable navigating complex, multi-vendor environments and have a deep familiarity with cloud networking in AWS, Azure, or GCP.</p>\n<p>You continually seek opportunities to automate, optimize, and innovate, leveraging tools like Python, Ansible, and AI-driven platforms to reduce manual toil and enhance system resilience. You are an advocate for observability and performance metrics, and you are skilled at implementing monitoring solutions that deliver actionable insights.</p>\n<p>Your problem-solving abilities are matched by your commitment to continuous improvement, and you excel at leading incident investigations and root-cause analyses. You value diversity and inclusion, recognizing that the best solutions come from a variety of perspectives and experiences.</p>\n<p>Your communication skills enable you to build strong partnerships across cross-functional teams, and you are eager to mentor others and share your knowledge. 
Above all, you are motivated by the opportunity to make a meaningful impact, both within Synopsys and across the broader technology landscape.</p>\n<p>Champion automation initiatives that significantly reduce operational toil, enhance reliability, and boost efficiency at scale.</p>\n<p>Own and evolve the observability strategy, driving improvements in monitoring, alerting, logging, and telemetry across multiple teams.</p>\n<p>Identify and implement operational improvements, partnering with teams to ensure scalable, sustainable excellence.</p>\n<p>Design and build automated operations and maintenance platforms to minimize manual intervention and maximize system performance and resiliency.</p>\n<p>Apply deep technical judgment to proactively prevent incidents and lead complex production investigations and root-cause analyses.</p>\n<p>Measure and optimize system performance, anticipating customer needs and innovating to continuously improve network capabilities.</p>\n<p>Collaborate with cross-functional stakeholders to deliver impactful enhancements to network reliability and scalability.</p>\n<p>Leverage AI tools and technologies for process automation and workflow optimization.</p>\n<p>Drive Synopsys&#39; network reliability to new heights, ensuring seamless connectivity for global operations and customers.</p>\n<p>Enable rapid incident response and recovery, minimizing downtime and safeguarding mission-critical services.</p>\n<p>Advance automation and infrastructure-as-code practices, setting new standards for efficiency and operational excellence.</p>\n<p>Elevate the observability and performance monitoring capabilities across the network stack, empowering data-driven decision-making.</p>\n<p>Foster cross-team collaboration, sharing best practices and mentoring peers to build a culture of reliability and innovation.</p>\n<p>Contribute to the continuous evolution of Synopsys&#39; network architecture, supporting future growth and technological 
advancement.</p>\n<p>Shape the adoption of AI-assisted workflows and advanced analytics for proactive network management.</p>","url":"https://yubhub.co/jobs/job_353dc38a-b8c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/hyderabad/software-engineering-staff-engineer-network-sre/44408/92676359872","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Ansible","AI-driven platforms","Cloud networking in AWS, Azure, or GCP","Campus and data center networks","Multi-vendor environments","Observability and performance metrics","Monitoring solutions","Incident investigations and root-cause analyses","Diversity and inclusion","Communication skills","Mentoring and knowledge sharing","Automation and infrastructure-as-code practices","ServiceNow","Jira","Linux system fundamentals","Git","Flux","Kubernetes"],"x-skills-preferred":[],"datePosted":"2026-04-05T13:24:39.146Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Ansible, AI-driven platforms, Cloud networking in AWS, Azure, or GCP, Campus and data center networks, Multi-vendor environments, Observability and performance metrics, Monitoring solutions, Incident investigations and root-cause analyses, Diversity and inclusion, Communication skills, Mentoring and knowledge sharing, Automation and infrastructure-as-code practices, ServiceNow, Jira, Linux system fundamentals, Git, Flux, 
Kubernetes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b6cbb9f9-1e3"},"title":"Sr. AI Engineer","description":"<p>At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content.</p>\n<p>You Are:</p>\n<p>An experienced AI engineer skilled in solving complex problems and building scalable solutions. You boost productivity across teams with strong technical knowledge of Linux, Python, SQL, Machine learning, generative AI and tools like Github, Grafana, Kibana, Pycharm, and VS Code. You excel at collaborating with cross-functional teams to develop agentic workflows. Your analytical mindset allows you to identify bottlenecks, optimize product operations and implement agentic workflows for autonomous regression analysis. 
You are committed to continuous improvement, leveraging data insights to enhance system performance and reliability.</p>\n<p>What You’ll Be Doing:</p>\n<ul>\n<li>Develop agentic workflows to enable autonomous tasks such as regression analysis and system monitoring.</li>\n<li>Design, develop, troubleshoot, and debug software programs for enhancements and new product initiatives.</li>\n<li>Re-design and develop existing applications to support scalable R&amp;D productivity solutions and cloud readiness across Synopsys teams.</li>\n<li>Troubleshoot, debug, and provide ongoing support for software tools used in emulation labs and internal hardware systems.</li>\n<li>Develop and maintain software tools for efficient scheduling of jobs on internal hardware platforms.</li>\n<li>Scale emulation lab operations through process improvements and advanced tooling, optimizing performance and reliability.</li>\n<li>Run benchmark test cases, analyze test results data, and identify spurious or bottleneck test cases to enhance system efficiency.</li>\n</ul>\n<p>The Impact You Will Have:</p>\n<ul>\n<li>Accelerate R&amp;D productivity by delivering scalable software solutions and automation tools across all Synopsys groups.</li>\n<li>Enhance the efficiency of emulation lab operations, enabling faster innovation cycles and improved testing outcomes.</li>\n<li>Reduce operational bottlenecks through insightful data analysis and targeted process improvements.</li>\n<li>Empower IT and engineering teams with automated workflows, freeing up resources for strategic initiatives.</li>\n<li>Contribute to the reliability and robustness of internal hardware systems, supporting the development of industry-leading silicon solutions.</li>\n<li>Support the continuous evolution of software tools, maintaining Synopsys’ leadership in chip design and verification technology.</li>\n</ul>\n<p>What You’ll Need:</p>\n<p>This position requires access to or use of information which is subject to export restrictions, including the International Traffic in Arms Regulations (ITAR). 
All applicants for this position must be &quot;U.S. Persons&quot; within the meaning of the ITAR. &quot;U.S. Persons&quot; include U.S. Citizens, U.S. Lawful Permanent Residents (i.e. &#39;Green Card Holders&#39;), Political Asylees, Refugees or other protected individuals as defined by 8 U.S.C. 1324b(a)(3).</p>\n<ul>\n<li>Requires 8+ years of related work experience plus master’s degree or equivalent.</li>\n<li>Expertise in machine learning, developing agentic workflows.</li>\n<li>Strong proficiency in Linux environments and production deployment processes.</li>\n<li>Advanced skills in Python, SQL, and Bash scripting for software development and automation.</li>\n<li>Expertise with ElasticSearch, LSF, Unix, Influx DB, and related technologies.</li>\n<li>Familiarity with Github, Grafana, PostgreSQL, Kibana, Pycharm, and VS Code.</li>\n</ul>\n<p>Who You Are:</p>\n<ul>\n<li>Innovative thinker with a proactive approach to problem solving.</li>\n<li>Collaborative team player with strong communication and interpersonal skills.</li>\n<li>Analytical and detail-oriented, able to interpret complex data and drive actionable insights.</li>\n<li>Adaptable and open to learning new technologies and methodologies.</li>\n<li>Committed to delivering high-quality results in fast-paced, dynamic environments.</li>\n<li>Inclusive and supportive, fostering an environment where diverse perspectives are valued.</li>\n</ul>\n<p>The Team You’ll Be A Part Of:</p>\n<p>You’ll join a high-impact R&amp;D engineering team focused on developing and enhancing software tools that drive Synopsys’ productivity and innovation. The team collaborates closely with IT, hardware, and cloud specialists to scale operations, automate workflows, and ensure seamless integration across all groups. Together, you’ll contribute to the future of silicon design, verification, and emulation, propelling Synopsys’ leadership in the industry.</p>\n<p>Rewards and Benefits:</p>\n<p>We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. 
Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.</p>\n<p>At Synopsys, we want talented people of every background to feel valued and supported to do their best work. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.</p>\n<p>In addition to the base salary, this role may be eligible for an annual bonus, equity, and other discretionary bonuses. Synopsys offers comprehensive health, wellness, and financial benefits as part of a competitive total rewards package. The actual compensation offered will be based on a number of job-related factors, including location, skills, experience, and education. Your recruiter can share more specific details on the total rewards package upon request. The base salary range for this role is across the U.S.</p>","url":"https://yubhub.co/jobs/job_b6cbb9f9-1e3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/sunnyvale/sr-ai-engineer/44408/91781667120","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165000-$248000","x-skills-required":["machine learning","generative AI","Linux","Python","SQL","Github","Grafana","Kibana","Pycharm","VS Code","ElasticSearch","LSF","Unix","Influx 
DB"],"x-skills-preferred":[],"datePosted":"2026-04-05T13:20:47.344Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sunnyvale"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, generative AI, Linux, Python, SQL, Github, Grafana, Kibana, Pycharm, VS Code, ElasticSearch, LSF, Unix, Influx DB","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":248000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_419c1058-a0b"},"title":"Site Reliability Engineer","description":"<p>About Mistral AI</p>\n<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life. We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments.</p>\n<p>Role Summary</p>\n<p>We are seeking highly experienced Site Reliability Engineers (SRE) to shape the reliability, scalability and performance of our platform and customer facing applications. 
You will work closely with our software engineers and research teams to ensure our systems meet and exceed our internal and external customers&#39; expectations.</p>\n<p>Responsibilities</p>\n<p>As a Site Reliability Engineer, you balance the day-to-day operations on production systems with long-term software engineering improvements to reduce operational toil and foster the reliability, availability, and performance of these systems.</p>\n<p>Operations (50%)</p>\n<ul>\n<li>Design, build, and maintain scalable, highly available and fault-tolerant infrastructures to support our web services and ML workloads</li>\n<li>Make sure our platform, inference and model training environments are always highly available and enable seamless replication of work environments across several HPC clusters</li>\n<li>Operate systems and troubleshoot issues in production environments (interrupts, on-call responses, users admin, data extraction, infrastructure scaling, etc.)</li>\n<li>Implement and improve monitoring, alerting, and incident response systems to ensure optimal system performance and minimize downtime</li>\n<li>Implement and maintain workflows and tools (CI/CD, containerization, orchestration, monitoring, logging and alerting systems) for both our client-facing APIs and large training runs</li>\n<li>Participate occasionally in on-call rotations to respond to incidents and perform root cause analysis to prevent future occurrences</li>\n</ul>\n<p>Development (50%)</p>\n<ul>\n<li>Drive continuous improvement in infrastructure automation, deployment, and orchestration using tools like Kubernetes, Flux, Terraform</li>\n<li>Collaborate with AI/ML researchers to develop and implement solutions that enable safe and reproducible model-training experiments</li>\n<li>Build a cloud-agnostic platform offering an abstraction layer between science and infrastructure</li>\n<li>Design and develop new workflows and tooling to improve the reliability, availability and performance of our 
systems (automation scripts, refactoring, new API-based features, web apps, dashboards, etc.)</li>\n<li>Collaborate with the security team to ensure infrastructure adheres to best security practices and compliance requirements</li>\n<li>Document processes and procedures to ensure consistency and knowledge sharing across the team</li>\n<li>Contribute to open-source projects, research publications, blog articles and conferences</li>\n</ul>\n<p>About You</p>\n<ul>\n<li>Master’s degree in Computer Science, Engineering or a related field</li>\n<li>7+ years of experience in a DevOps/SRE role</li>\n<li>Strong experience with cloud computing and highly available distributed systems</li>\n<li>Exposure to site reliability issues in critical environments (issue root cause analysis, in-production troubleshooting, on-call rotations...) </li>\n<li>Experience working against reliability KPIs (observability, alerting, SLAs)</li>\n<li>Hands-on experience with CI/CD, containerization and orchestration tools (Docker, Kubernetes...)</li>\n<li>Knowledge of monitoring, logging, alerting and observability tools (Prometheus, Grafana, ELK Stack, Datadog...)</li>\n<li>Familiarity with infrastructure-as-code tools like Terraform or CloudFormation</li>\n<li>Proficiency in scripting languages (Python, Go, Bash...) 
and knowledge of software development best practices</li>\n<li>Strong understanding of networking, security, and system administration concepts</li>\n<li>Excellent problem-solving and communication skills</li>\n<li>Self-motivated and able to work well in a fast-paced startup environment</li>\n</ul>\n<p>Your Application Will Be All The More Interesting If You Also Have:</p>\n<ul>\n<li>Experience in an AI/ML environment</li>\n<li>Experience of high-performance computing (HPC) systems and workload managers (Slurm)</li>\n<li>Worked with modern AI-oriented solutions (Fluidstack, Coreweave, Vast...)</li>\n</ul>","url":"https://yubhub.co/jobs/job_419c1058-a0b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai/careers"},"x-apply-url":"https://jobs.lever.co/mistral/6e16e4fa-a60b-4270-a815-06b0450fb597","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["cloud computing","highly available distributed systems","DevOps","SRE","Kubernetes","Flux","Terraform","CI/CD","containerization","orchestration","monitoring","logging","alerting","observability","infrastructure-as-code","scripting languages","software development best practices","networking","security","system administration"],"x-skills-preferred":["AI/ML environment","high-performance computing","workload managers","modern AI-oriented solutions"],"datePosted":"2026-03-10T11:32:04.928Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud computing, highly available distributed systems, DevOps, SRE, Kubernetes, Flux, Terraform, CI/CD, containerization, orchestration, monitoring, 
logging, alerting, observability, infrastructure-as-code, scripting languages, software development best practices, networking, security, system administration, AI/ML environment, high-performance computing, workload managers, modern AI-oriented solutions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ee2fcbdc-fc4"},"title":"Principal Consultant - Data Architecture","description":"<p><strong>Principal Consultant - Data Architecture</strong></p>\n<p>You will act as a senior technical leader in complex data and analytics engagements, shaping and governing end-to-end enterprise data architectures, leading technical teams, and serving as a trusted technical advisor for clients and internal stakeholders.</p>\n<p><strong>About Your Role</strong></p>\n<p>As a Principal Data Architecture Consultant, you will be responsible for ensuring that enterprise data and analytics solutions are scalable, secure, and production-ready, while translating business requirements into robust technical designs and delivery roadmaps.</p>\n<p><strong>Your Role Will Include:</strong></p>\n<ul>\n<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>\n<li>Translate business objectives into scalable, secure, and compliant data solutions</li>\n<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>\n<li>Guide delivery teams through implementation, rollout, and production readiness</li>\n<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>\n<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>\n<li>Support pre-sales and solution design activities from a technical perspective</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5–8+ years of 
experience in enterprise data architecture, system data integration, data engineering, or analytics</li>\n<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>\n<li>Strong client-facing experience in complex enterprise environments</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>Strong expertise in modern data architectures, including:</li>\n<li>Data Mesh/ Data Fabric/ Data lake / data warehouse architectures</li>\n<li>Modern Data Architecture design principles</li>\n<li>Batch and streaming data integration patterns</li>\n<li>Data Platform, DevOps, deployment and security architectures</li>\n<li>Analytics and AI enablement architectures</li>\n<li>Hands-on experience with cloud data platforms, e.g.:</li>\n<li>Azure, AWS or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>\n<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>\n<li>Experience with NoSQL databases (e.g. 
Cosmos DB, MongoDB, InfluxDB)</li>\n<li>Solid understanding of API-based and event-driven architectures</li>\n<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation etc.</li>\n</ul>\n<p><strong>Engineering &amp; Platform Foundations</strong></p>\n<ul>\n<li>Experience with data pipelines, orchestration, and automation</li>\n<li>Familiarity with CI/CD concepts and production-grade deployments</li>\n<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>\n</ul>\n<p><strong>Data Management &amp; Governance</strong></p>\n<ul>\n<li>Strong understanding of data management and governance principles, including:</li>\n<li>Data quality, metadata, lineage, master data management</li>\n<li>Data Management software and tools</li>\n<li>Security, access control, and compliance considerations</li>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>\n<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>\n<li>Hands-on Experience with data governance or metadata tools</li>\n<li>Cloud, data, or architecture certifications</li>\n</ul>\n<p><strong>Language &amp; Mobility</strong></p>\n<ul>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will be utilizing the most innovative technological solutions in modern data ecosystem. 
In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market-leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>\n<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>\n<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths. Infosys is on Germany’s list of top employers for 2023. Management Consulting Magazine named us on their list of Best Firms to Work for. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>\n<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. 
Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ee2fcbdc-fc4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/uuSzzCt8qNbo6UpEFkSyjY/hybrid-principal-consultant---data-architecture-in-london-at-infosys-consulting---europe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Mesh/ Data Fabric/ Data lake / data warehouse architectures","Modern Data Architecture design principles","Batch and streaming data integration patterns","Data Platform, DevOps, deployment and security architectures","Analytics and AI enablement architectures","Azure, AWS or GCP","Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric","Postgres, SQL Server, Oracle","Cosmos DB, MongoDB, InfluxDB","API-based and event-driven architectures","Docker / Kubernetes"],"x-skills-preferred":["Advanced analytics, AI / ML or GenAI","Streaming platforms (e.g. 
Kafka, Azure Event Hubs)","Data governance or metadata tools","Cloud, data, or architecture certifications"],"datePosted":"2026-03-09T16:52:06.783Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, Azure, AWS or GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, Postgres, SQL Server, Oracle, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, Docker / Kubernetes, Advanced analytics, AI / ML or GenAI, Streaming platforms (e.g. Kafka, Azure Event Hubs), Data governance or metadata tools, Cloud, data, or architecture certifications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_56dc9a51-e66"},"title":"Principal Consultant - Data Architecture","description":"<p><strong>Principal Consultant - Data Architecture</strong></p>\n<p>You will be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset.</p>\n<p><strong>About Your Role</strong></p>\n<p>As a Principal Data Architecture Consultant, you will act as a senior technical leader in complex data and analytics engagements. 
You will shape and govern end-to-end enterprise data architectures, lead technical teams, and serve as a trusted technical advisor for clients and internal stakeholders.</p>\n<p><strong>Your Role Will Include:</strong></p>\n<ul>\n<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>\n<li>Translate business objectives into scalable, secure, and compliant data solutions</li>\n<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>\n<li>Guide delivery teams through implementation, rollout, and production readiness</li>\n<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>\n<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>\n<li>Support pre-sales and solution design activities from a technical perspective</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>\n<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>\n<li>Strong client-facing experience in complex enterprise environments</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>Strong expertise in modern data architectures, including:</li>\n<li>Data Mesh/ Data Fabric/ Data lake / data warehouse architectures</li>\n<li>Modern Data Architecture design principles</li>\n<li>Batch and streaming data integration patterns</li>\n<li>Data Platform, DevOps, deployment and security architectures</li>\n<li>Analytics and AI enablement architectures</li>\n<li>Hands-on experience with cloud data platforms, e.g.:</li>\n<li>Azure, AWS or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>\n<li>Strong SQL skills and 
experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>\n<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>\n<li>Solid understanding of API-based and event-driven architectures</li>\n<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation, etc.</li>\n</ul>\n<p><strong>Engineering &amp; Platform Foundations</strong></p>\n<ul>\n<li>Experience with data pipelines, orchestration, and automation</li>\n<li>Familiarity with CI/CD concepts and production-grade deployments</li>\n<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>\n</ul>\n<p><strong>Data Management &amp; Governance</strong></p>\n<ul>\n<li>Strong understanding of data management and governance principles, including:</li>\n<li>Data quality, metadata, lineage, master data management</li>\n<li>Data Management software and tools</li>\n<li>Security, access control, and compliance considerations</li>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>\n<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>\n<li>Hands-on experience with data governance or metadata tools</li>\n<li>Cloud, data, or architecture certifications</li>\n</ul>\n<p><strong>Language &amp; Mobility</strong></p>\n<ul>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>You will work with the most innovative technological solutions in the modern data ecosystem. 
In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market-leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>\n<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>\n<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths. Infosys is on Germany’s list of top employers for 2023. Management Consulting Magazine named us on their list of Best Firms to Work for. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>\n<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. 
Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_56dc9a51-e66","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/hpBWjvvy8D6B1f818cHxZR/remote-principal-consultant---data-architecture-in-poland-at-infosys-consulting---europe","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["enterprise data architecture","system data integration","data engineering","analytics","modern data architectures","Data Mesh/ Data Fabric/ Data lake / data warehouse architectures","Modern Data Architecture design principles","Batch and streaming data integration patterns","Data Platform, DevOps, deployment and security architectures","Analytics and AI enablement architectures","cloud data platforms","Azure","AWS","GCP","Databricks","Snowflake","BigQuery","Azure Synapse / Microsoft Fabric","SQL","relational databases","Postgres","SQL Server","Oracle","NoSQL databases","Cosmos DB","MongoDB","InfluxDB","API-based and event-driven architectures","data migration programmes","data pipelines","orchestration","automation","CI/CD concepts","production-grade deployments","distributed systems","Docker","Kubernetes","data management and governance principles","data quality","metadata","lineage","master data management","data management software and tools","security","access control","compliance considerations","Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience"],"x-skills-preferred":["advanced analytics","AI / ML or GenAI","streaming platforms","Kafka","Azure Event Hubs","data governance or metadata tools","cloud","data","architecture 
certifications"],"datePosted":"2026-03-09T16:51:22.857Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Poland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"enterprise data architecture, system data integration, data engineering, analytics, modern data architectures, Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, cloud data platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, SQL, relational databases, Postgres, SQL Server, Oracle, NoSQL databases, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, data migration programmes, data pipelines, orchestration, automation, CI/CD concepts, production-grade deployments, distributed systems, Docker, Kubernetes, data management and governance principles, data quality, metadata, lineage, master data management, data management software and tools, security, access control, compliance considerations, Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience, advanced analytics, AI / ML or GenAI, streaming platforms, Kafka, Azure Event Hubs, data governance or metadata tools, cloud, data, architecture certifications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fbb19758-f83"},"title":"Principal Consultant Data Architecture (m/w/d)","description":"<p>Are you looking to advance your career and work with experienced, talented colleagues to successfully solve the most significant challenges of our clients? We are growing further and seeking engaged individuals to strengthen our team. 
You will be part of a dynamic, fast-growing company with over 300,000 employees.</p>\n<p>Our dynamic organisation allows you to work across topics and contribute your ideas, experiences, creativity, and goal orientation. Are you ready?</p>\n<p>As a Principal Consultant Data Architecture, you will be the technical leader in complex data and analytics projects. You will design and be responsible for comprehensive enterprise data architectures, lead technical teams, and be a trusted technical advisor for customers and internal stakeholders.</p>\n<p>You will ensure that enterprise data and analytics solutions are scalable, secure, and operational, translate technical requirements into robust target architectures, and plan their rollout.</p>\n<p><strong>Your Tasks:</strong></p>\n<ul>\n<li>Definition and governance of target architectures for enterprise data, integration, and analytics in cloud and hybrid environments</li>\n<li>Translation of business goals into scalable, secure, and compliant architectures</li>\n<li>Leadership of the conception of comprehensive end-to-end data solutions (data ingestion, data integration, storage, security, processing, analytics, AI support)</li>\n<li>Steering and supporting delivery teams during implementation, rollout, and establishment of operational readiness</li>\n<li>Acting as the senior technical point of contact for customer architects, IT managers, and technical teams</li>\n<li>Mentoring of system and data architects as well as developers</li>\n<li>Participation in the further development of best practices and reference architectures</li>\n<li>Support of presales and solution design activities from a technical perspective</li>\n</ul>\n<p><strong>What You Bring - Minimum Requirements</strong></p>\n<p><strong>Experience &amp; Seniority</strong></p>\n<ul>\n<li>At least 5 years of relevant professional experience in enterprise data architecture, data integration, data engineering, or analytics</li>\n<li>Experience in leading enterprise data 
architecture workstreams or technical teams</li>\n<li>Strong customer and advisory experience in complex enterprise environments</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>In-depth expertise in modern data architectures, particularly:</li>\n</ul>\n<ol>\n<li>Data Mesh / Data Fabric / Data Lake / Data Warehouse Architectures</li>\n<li>Principles of modern data architecture designs</li>\n<li>Integration patterns for batch and streaming data</li>\n<li>Data platform, DevOps, deployment, and security architectures</li>\n<li>Analytics and AI enablement architectures</li>\n</ol>\n<ul>\n<li>Practical experience with cloud data platforms, such as:</li>\n</ul>\n<ol>\n<li>Azure, AWS, or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>\n</ol>\n<ul>\n<li>Very good SQL knowledge as well as experience with relational databases (e.g. PostgreSQL, SQL-Server, Oracle)</li>\n<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>\n<li>Good understanding of API-based and event-driven architectures</li>\n<li>Experience in conceiving and steering enterprise data migration programs (including mapping, transformation rules, data quality measures, etc.)</li>\n</ul>\n<p><strong>Engineering &amp; Platform Fundamentals</strong></p>\n<ul>\n<li>Experience with data pipelines, orchestration, and automation</li>\n<li>Knowledge of CI/CD concepts and production-ready deployments</li>\n<li>Understanding of distributed systems; Docker / Kubernetes knowledge is an advantage</li>\n</ul>\n<p><strong>Data Management &amp; Governance</strong></p>\n<ul>\n<li>Very good understanding of data management and governance principles, particularly:</li>\n</ul>\n<ol>\n<li>Data quality, metadata, lineage, master data management</li>\n<li>Data management software and tools</li>\n<li>Security, access, and compliance requirements</li>\n</ol>\n<ul>\n<li>Bachelor&#39;s or master&#39;s degree in computer science, 
engineering, mathematics, or a related field, or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with advanced analytics, AI/ML, or GenAI from an architect&#39;s perspective</li>\n<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>\n<li>Practical experience with data governance or metadata tools</li>\n<li>Cloud or architecture certifications</li>\n</ul>\n<p><strong>Language &amp; Mobility (Germany)</strong></p>\n<ul>\n<li>Fluent German skills (at least C1) for customer communication in the German-speaking market</li>\n<li>Very good English skills</li>\n<li>Project-related travel readiness</li>\n</ul>\n<p><strong>About Your Team</strong></p>\n<p>You will become part of our growing data and analytics teams. In this area, you will work with modern technologies in modern data ecosystems. You have the opportunity to turn your own ideas into results - in the areas of data and analytics strategy, data management and governance, data platforms and engineering, as well as analytics and data science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>You will become an employee of a globally renowned management consulting firm that is at the forefront of industry disruption. We work across industries with leading companies. Our culture is inclusive and entrepreneurial. As a mid-sized consulting firm embedded in the size of Infosys, we can support our customers worldwide and throughout the entire transformation process in a partnership-like manner.</p>\n<p>Our values IC-LIFE - Inclusion, Equity &amp; Diversity, Client, Leadership, Integrity, Fairness, and Excellence - form our compass of values. Further information can be found on our career website.</p>\n<p>In Europe, we are awarded by the Financial Times and Forbes as one of the leading consulting firms. 
Infosys is one of the top employers in Germany for 2023 and has been certified by the Top Employers Institute for outstanding working conditions in Europe for five years in a row.</p>\n<p>We offer market-leading remuneration, attractive additional benefits, and excellent training and development opportunities. Curious to learn more? Then we look forward to your application.</p>\n<p>More about Infosys Consulting - Europe</p>\n<p>Where Innovation meets Excellence.</p>\n<p>Infosys Consulting is a globally renowned management consulting firm that is on the front-line of industry disruption. We are a mid-size player with a supportive, entrepreneurial spirit that works with a market-leading brand in every sector, while our parent organization Infosys is a top-5 powerhouse IT brand that is outperforming the market and experiencing rapid growth.</p>\n<p>Our consulting business is annually recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths we offer to our consultants. 
We are committed to fostering an inclusive work culture that inspires everyone to deliver their best.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fbb19758-f83","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/sve4gTuNFLf3RtEjhQMzHp/remote-principal-consultant-data-architecture-(m%2Fw%2Fd)--deutschlandweit-in-munich-at-infosys-consulting---europe","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Mesh","Data Fabric","Data Lake","Data Warehouse Architectures","Principles of modern data architecture designs","Integration patterns for batch and streaming data","Data platform, DevOps, deployment, and security architectures","Analytics and AI enablement architectures","Azure","AWS","GCP","Databricks","Snowflake","BigQuery","Azure Synapse / Microsoft Fabric","PostgreSQL","SQL-Server","Oracle","Cosmos DB","MongoDB","InfluxDB","API-based and event-driven architectures","Enterprise data migration programs"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:50:38.864Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Mesh, Data Fabric, Data Lake, Data Warehouse Architectures, Principles of modern data architecture designs, Integration patterns for batch and streaming data, Data platform, DevOps, deployment, and security architectures, Analytics and AI enablement architectures, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, PostgreSQL, SQL-Server, Oracle, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, Enterprise data migration 
programs"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6d977c57-e86"},"title":"Frontend SE II","description":"<p>You will work on designing and developing product features for 820M monthly active users. This role involves taking ownership of product features and ensuring their quality. You will write clean code with proper test coverage, review others&#39; code, and mentor junior team members. You will also build reusable modules and libraries, optimize applications for speed and scalability, and ensure technical feasibility of UI/UX designs. Additionally, you will identify and correct bottlenecks and fix bugs.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop product features that are delivered to 820M monthly active users</li>\n<li>Take ownership of product features and be responsible for their quality</li>\n<li>Write clean code with proper test coverage</li>\n<li>Review others&#39; code and ensure that it is up to organisation standards</li>\n<li>Mentor junior members of the team</li>\n<li>Build reusable modules and libraries for future use</li>\n<li>Optimise the application for maximum speed and scalability</li>\n<li>Ensure the technical feasibility of UI/UX designs</li>\n<li>Identify and correct bottlenecks and fix bugs</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>Experience with CSS tooling like Sass and Tailwind, and state management libraries like Redux</li>\n<li>Experience in working with large frontend applications</li>\n<li>Knowledge of backend development and tools</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>4+ years of experience writing client-side JavaScript and developing medium- to large-scale client-side applications</li>\n<li>Proficient understanding of the modern web tech stack, including HTML5, CSS3, and ES6</li>\n<li>Strong understanding of ReactJS and Flux architecture</li>\n<li>Familiarity with build tools like Webpack, Babel, and Gulp</li>\n<li>Proficient understanding of cross-browser 
compatibility issues and ways to work around them</li>\n<li>Knowledge of frontend optimisation techniques and tools (e.g. Lighthouse)</li>\n<li>Proficient with Git</li>\n<li>Experience in writing unit and integration tests</li>\n<li>Excellent problem-solving skills and a proactive approach to issue resolution</li>\n<li>Excellent verbal and written communication skills</li>\n<li>Bachelor’s degree in Computer Science (or equivalent)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6d977c57-e86","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Helpshift","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/B96C1B28F1","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["client-side JavaScript","HTML5","CSS3","ES6","ReactJS","Flux architecture","Webpack","Babel","Gulp","Git","unit and integration tests"],"x-skills-preferred":["Sass","Tailwind","Redux","backend development and tools"],"datePosted":"2026-03-09T10:53:28.284Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"client-side JavaScript, HTML5, CSS3, ES6, ReactJS, Flux architecture, Webpack, Babel, Gulp, Git, unit and integration tests, Sass, Tailwind, Redux, backend development and tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f0ab10d6-81a"},"title":"Revenue Accounting Manager","description":"<p><strong>Compensation</strong></p>\n<p>We offer a competitive salary range of $162K – $180K, including generous equity, performance-related bonus(es) for eligible employees, and the following 
benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>OpenAI Finance ensures the organization is positioned for long-term success as we pursue our mission.</p>\n<p>The Revenue team plays a critical role in enabling OpenAI to scale its commercial offerings—overseeing billing operations, deal desk, revenue systems, and revenue accounting. 
We work cross-functionally with Technical Revenue, Finance Data, and Revenue Systems teams to support complex commercial arrangements, improve operational efficiency, and maintain financial integrity.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Revenue Accounting Manager, you will be a key contributor to the month-end close cycle, build and monitor internal controls, and help drive process improvement and automation across our revenue accounting function. This role has a particular focus on consumer revenue operationalization—owning the close, reconciliations, and flux analysis for automated revenue streams, while ensuring system changes, data flows, and accounting outcomes remain accurate, controlled, and ASC 606-compliant as the business scales.</p>\n<p>We’re looking for a strategic operator who thrives in fast-paced, cross-functional environments, brings strong close discipline, and is excited to help strengthen the infrastructure and rigor that underpins OpenAI’s growth.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Lead key components of the monthly revenue close process for consumer revenue streams, ensuring accuracy, completeness, and timeliness.</li>\n</ul>\n<ul>\n<li>Own reconciliations for key consumer revenue accounts and related balance sheet accounts—investigate discrepancies, drive issues to resolution, and strengthen the underlying processes.</li>\n</ul>\n<ul>\n<li>Own flux analysis for consumer revenue and related accounts—identify anomalies, investigate root causes, and partner with StratFin and cross-functional teams to explain variances and drivers.</li>\n</ul>\n<ul>\n<li>Manage and train extended workforce resources to scale execution, including quality standards, documentation requirements, and close discipline.</li>\n</ul>\n<ul>\n<li>Design, document, and maintain strong internal controls over consumer revenue processes, including scalable evidence and audit-ready support.</li>\n</ul>\n<ul>\n<li>Partner with Revenue 
Systems, Finance Systems, Finance Data, and Order-to-Cash teams to translate product and system logic into accounting flows, ensuring consistency with ASC 606—especially in automated environments.</li>\n</ul>\n<ul>\n<li>Monitor and manage system/process changes impacting consumer revenue (new product features, pricing/packaging changes, billing logic updates, refunds/credits, data pipeline changes), ensuring accounting impacts are assessed and operationalized.</li>\n</ul>\n<ul>\n<li>Support accounting assessments for new or evolving consumer revenue arrangements, including documentation of key judgments and operational readiness for close.</li>\n</ul>\n<ul>\n<li>Support internal and external audit requests, including documentation of account control matrices, audit testing support, and remediation of findings.</li>\n</ul>\n<ul>\n<li>Drive continuous improvement through process optimization, automation enhancements, and standardization (e.g., close checklists, templates, scripts, workflow tools).</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>6+ years of accounting experience, ideally in a SOX-controlled environment, with strong revenue accounting exposure.</li>\n</ul>\n<ul>\n<li>A CPA (or equivalent) and deep knowledge of ASC 606.</li>\n</ul>\n<ul>\n<li>Comfort performing flux analysis and translating variances into clear, decision-useful explanations.</li>\n</ul>\n<ul>\n<li>Experience designing and operating internal controls, including audit-ready documentation and evidence standards.</li>\n</ul>\n<ul>\n<li>Strong systems and data fluency, including comfort working with large datasets and partnering with data/systems teams.</li>\n</ul>\n<ul>\n<li>Experience supporting system implementations or process automation in a revenue or order-to-cash context.</li>\n</ul>\n<ul>\n<li>The ability to operate in ambiguity—scoping work, prioritizing risk, and building scalable processes as needs 
evolve.</li>\n</ul>\n<ul>\n<li>Experience with Oracle Fusion ERP and system implementations.</li>\n</ul>\n<ul>\n<li>A passion for technology and artificial intelligence.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f0ab10d6-81a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/b2c29043-cbd7-440a-b70b-2de3676776ef","x-work-arrangement":"Hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$162K – $180K","x-skills-required":["ASC 606","Revenue Accounting","Internal Controls","Flux Analysis","Oracle Fusion ERP","System Implementations"],"x-skills-preferred":["Artificial Intelligence","Cloud Computing","Data Analytics"],"datePosted":"2026-03-08T22:16:54.428Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Technology","skills":"ASC 606, Revenue Accounting, Internal Controls, Flux Analysis, Oracle Fusion ERP, System Implementations, Artificial Intelligence, Cloud Computing, Data Analytics","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":162000,"maxValue":180000,"unitText":"YEAR"}}}]}