{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/unity-catalog"},"x-facet":{"type":"skill","slug":"unity-catalog","display":"Unity Catalog","count":5},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1c323c93-132"},"title":"DATA Platform & DevOps Engineer","description":"<p>We are looking for a Databricks Platform Administrator / Data Platform Engineer responsible for supporting the administration, governance, and operational management of the Databricks platform. 
The role focuses on ensuring platform stability, security, cost efficiency, and enabling data teams through standardized processes and automation.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Support the administration of Databricks environments (DEV, TEST, PROD)</li>\n<li>Manage user access, roles, and permissions</li>\n<li>Support identity provisioning via Azure AD / AWS IAM</li>\n<li>Collaborate with cloud teams on connectivity, storage access, and security topics</li>\n</ul>\n<ul>\n<li>Manage access to notebooks, jobs, clusters, and SQL Warehouses</li>\n<li>Support cluster configuration and usage policies</li>\n<li>Troubleshoot platform-related issues and coordinate with support teams</li>\n<li>Maintain operational documentation and internal guidelines</li>\n</ul>\n<ul>\n<li>Support Unity Catalog governance (catalogs, schemas, tables)</li>\n<li>Manage access controls and data ownership</li>\n<li>Ensure platform security standards and audit readiness</li>\n<li>Contribute to compliance and governance processes</li>\n</ul>\n<ul>\n<li>Support CI/CD processes for Databricks deployments</li>\n<li>Participate in automation initiatives using Git-based workflows</li>\n<li>Contribute to standard templates and deployment practices</li>\n<li>Support lifecycle management across environments</li>\n</ul>\n<ul>\n<li>Monitor platform usage and identify optimization opportunities</li>\n<li>Support cost tracking and reporting activities</li>\n<li>Contribute to tagging and governance practices for cost attribution</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Master’s degree in Computer Science, Software Engineering, Data Engineering, or a related technical field</li>\n<li>Minimum 3 years of experience in Databricks platform administration, preferably in an Azure environment</li>\n<li>Good understanding of CI/CD concepts and Git-based workflows</li>\n<li>Experience working in cloud-based data platforms or DevOps environments is a 
plus</li>\n<li>Experience with Databricks administration (workspace management, access management, clusters; Unity Catalog is a plus)</li>\n<li>Knowledge of Azure cloud services (preferred) or other cloud providers</li>\n<li>Familiarity with Infrastructure as Code concepts (Terraform is a plus)</li>\n<li>Scripting knowledge (Python or Bash is a plus)</li>\n<li>Databricks certifications (e.g., Databricks Certified Data Engineer, Databricks Platform Administrator)</li>\n<li>Cloud certifications such as Azure Data Engineer, AWS Solutions Architect, or equivalent</li>\n<li>Strong English communication skills (written and spoken)</li>\n<li>Good analytical and troubleshooting skills</li>\n<li>Ability to work in a collaborative and international environment</li>\n<li>Proactive mindset and structured way of working</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>A role with true technical ownership: architecture, scaling, and governance decisions that directly impact production AI solutions.</li>\n<li>Complex projects that go beyond “just pipelines” – covering big data processing and large-scale ML/DL deployment.</li>\n<li>Opportunities to deepen your expertise in Databricks, cloud-native ML, and MLOps.</li>\n<li>A team where your input and technical decisions truly matter.</li>\n<li>A competitive package and benefits.</li>\n</ul>\n<p>Join us and make a direct impact on shaping the future of Data, AI, and Mobility.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1c323c93-132","directApply":true,"hiringOrganization":{"@type":"Organization","name":"AVL","sameAs":"https://jobs.avl.com","logo":"https://logos.yubhub.co/jobs.avl.com.png"},"x-apply-url":"https://jobs.avl.com/job/Sala-Al-Jadida-DATA-Plateform-&-DevOps-Enginner/1383237733/","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Competitive 
package and benefits","x-skills-required":["Databricks","Azure","Git","CI/CD","Infrastructure as Code","Terraform","Scripting","Python","Bash","Cloud certifications"],"x-skills-preferred":["Unity Catalog","Azure Data Engineer","AWS Solutions Architect"],"datePosted":"2026-04-22T17:34:47.209Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sala Al Jadida"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Databricks, Azure, Git, CI/CD, Infrastructure as Code, Terraform, Scripting, Python, Bash, Cloud certifications, Unity Catalog, Azure Data Engineer, AWS Solutions Architect"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b6468802-409"},"title":"Senior Staff Software Engineer - Unity Catalog Runtime Enforcement","description":"<p>We are seeking a Senior Staff Software Engineer to lead the Unity Catalog Runtime Enforcement team. 
As a key member of our engineering team, you will be responsible for developing and hardening the runtime enforcement layer for Unity Catalog, ensuring secure, consistent authorization and data access across Databricks compute, engines, and clouds.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Lead and grow an engineering team delivering runtime enforcement outcomes in a high-severity, cross-org domain; establish scope, SLAs, and phased roadmaps.</li>\n<li>Establish single-source-of-truth scope, operating model, and durable mechanisms for enforcement.</li>\n<li>Lead multi-year, multi-team initiatives that shape how Databricks enforces Unity Catalog at runtime across compute types and engines.</li>\n<li>Introduce tools to allow greater automation and operability of services.</li>\n<li>Use your deep experience to help prevent and investigate production issues.</li>\n<li>Plan and lead complex technical projects that span several teams across the company.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>15+ years of industry experience building and supporting large-scale distributed systems.</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>\n<li>Extensive experience building and maintaining distributed systems.</li>\n<li>Security-first mindset.</li>\n<li>Cross-org leadership in ambiguous, incident-heavy environments; disciplined rollout and ops maturity.</li>\n<li>Motivated by delivering customer value and impact.</li>\n<li>Experience driving company initiatives towards customer satisfaction.</li>\n<li>BS/MS/PhD in Computer Science or related majors, or equivalent experience.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b6468802-409","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8422477002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["large-scale distributed systems","Unity Catalog","runtime enforcement","authorization","data access"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:40.880Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam, Netherlands"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"large-scale distributed systems, Unity Catalog, runtime enforcement, authorization, data access"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c6d7f1a0-882"},"title":"Resident Solutions Architect - Mumbai","description":"<p>We are seeking an experienced Resident Solution Architect (RSA) to join our Professional Services team and work directly with strategic customers on their data and AI transformation initiatives using the Databricks platform.</p>\n<p>As an RSA, you will serve as a trusted technical advisor and hands-on expert, guiding customers to solve complex big data challenges using the Databricks platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Collaborating with customers to understand their data and AI transformation goals and developing tailored solutions using the Databricks platform</li>\n<li>Designing and implementing scalable and secure data architectures using Apache Spark, Delta Lake, and other Databricks technologies</li>\n<li>Providing expert-level technical guidance and support to customers during the implementation 
process</li>\n<li>Identifying and addressing potential roadblocks and providing creative solutions to overcome them</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role</li>\n<li>4+ years of experience as a Solution Architect creating designs and solving Big Data challenges for customers</li>\n<li>Expertise in Apache Spark, distributed computing, and Databricks platform capabilities</li>\n<li>Comfortable writing code in Python, PySpark, and Scala</li>\n<li>Exceptional SQL, Spark SQL, Spark-streaming skills</li>\n<li>Advanced knowledge of Spark optimizations, Delta, and the Databricks Lakehouse Platform</li>\n<li>Expertise in Azure</li>\n<li>Expertise in NoSQL databases (MongoDB, Redis, HBase)</li>\n<li>Expertise in data governance and security (Unity Catalog, RBAC)</li>\n<li>Ability to work with partner organizations and deliver complex programs</li>\n<li>Ability to lead large technical delivery teams</li>\n<li>Understanding of the larger competitive landscape, such as EMR, Snowflake, and SageMaker</li>\n<li>Experience with migration from on-prem / cloud to Databricks is a plus</li>\n<li>Excellent communication and client-facing consulting skills, with the ability to simplify complex technical concepts</li>\n<li>Willingness to travel for onsite customer engagements within India</li>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<p>Good-to-have Skills:</p>\n<ul>\n<li>Experience with ML libraries/frameworks: Scikit-learn, TensorFlow, PyTorch</li>\n<li>Familiarity with MLOps tools and processes, including MLflow for tracking and deployment</li>\n<li>Experience delivering LLM and GenAI solutions at scale (RAG architectures, prompt engineering)</li>\n<li>Extensive experience with Hadoop, Trino, Ranger, and other open-source technology stacks</li>\n<li>Expertise in cloud platforms like AWS and GCP</li>\n</ul>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c6d7f1a0-882","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8107166002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Data Lakes","Python","PySpark","Scala","SQL","Spark SQL","Spark-streaming","Azure","NoSQL databases","data governance","security","Unity Catalog","RBAC"],"x-skills-preferred":["ML libraries/frameworks","MLOps tools and processes","LLM and GenAI solutions","Hadoop","Trino","Ranger","AWS","GCP"],"datePosted":"2026-04-18T15:45:04.317Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mumbai, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Data Lakes, Python, PySpark, Scala, SQL, Spark SQL, Spark-streaming, Azure, NoSQL databases, data governance, security, Unity Catalog, RBAC, ML libraries/frameworks, MLOps tools and processes, LLM and GenAI solutions, Hadoop, Trino, Ranger, AWS, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_542096f5-82b"},"title":"Business Intelligence Manager","description":"<p>As a Business Intelligence Manager, you will play a critical role in building secure, interactive data and AI applications hosted natively on the Databricks platform. You will design, build, and maintain scalable data web applications, AI chatbots, and custom operational interfaces using frameworks like Streamlit, React, and FastAPI. 
By leveraging Databricks Apps&#39; serverless infrastructure, you will eliminate the need for external hosting and empower business users to make informed decisions by bridging the gap between raw data and solutions using your engineering prowess, Databricks apps, Databricks SQL, Lakebase and AgentBricks.</p>\n<p>The Impact You Will Have:</p>\n<ul>\n<li>Build: You will design and develop robust frontend interfaces and API backends (e.g., FastAPI routing user queries to model-serving endpoints). You will build solutions ranging from data-rich dashboards to enterprise chat solutions powered by the Mosaic AI Agent Framework.</li>\n</ul>\n<ul>\n<li>Architect: You will design secure and scalable application architectures that satisfy GTM requirements for building custom SaaS applications.</li>\n</ul>\n<ul>\n<li>Scale: You will create scalable applications that seamlessly connect to Databricks SQL via the Statement Execution API or Databricks SDK. You will establish CI/CD pipelines using Databricks Asset Bundles (DABs) to automate deployment across development, staging, and production workspaces.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>You have 5+ years of experience working as a Software Engineer, Data App Developer, or Full-Stack Engineer building interactive web applications.</li>\n</ul>\n<ul>\n<li>You are proficient in Python, DBSQL and/or Node.js. Experience with frameworks like Streamlit, Dash, Flask, FastAPI, React, or Express is required.</li>\n</ul>\n<ul>\n<li>You know the Databricks ecosystem. Familiarity with Unity Catalog, Databricks SQL, Databricks SDK for Python, and Model Serving is highly preferred.</li>\n</ul>\n<ul>\n<li>You have built for scale and security. Experience with CI/CD tools, Infrastructure as Code (specifically Databricks Asset Bundles), and implementing secure OAuth flows.</li>\n</ul>\n<ul>\n<li>You are passionate about applying AI. 
Experience integrating LLMs or Mosaic AI Agent Frameworks into application backends to deliver intelligent chat and RAG solutions.</li>\n</ul>\n<ul>\n<li>You excel in a collaborative environment. You can translate stakeholder requirements into intuitive user interfaces, working through dependencies and troubleshooting deployment errors or telemetry logs.</li>\n</ul>\n<p><strong>Pay Range Transparency</strong></p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_542096f5-82b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8501030002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$158,200-$217,450 USD","x-skills-required":["Python","DBSQL","Node.js","Streamlit","React","FastAPI","Unity Catalog","Databricks SQL","Databricks SDK for Python","Model Serving"],"x-skills-preferred":["CI/CD tools","Infrastructure as Code","OAuth flows","LLMs","Mosaic AI Agent 
Frameworks"],"datePosted":"2026-04-18T15:43:21.206Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, DBSQL, Node.js, Streamlit, React, FastAPI, Unity Catalog, Databricks SQL, Databricks SDK for Python, Model Serving, CI/CD tools, Infrastructure as Code, OAuth flows, LLMs, Mosaic AI Agent Frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":158200,"maxValue":217450,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6215398a-2c4"},"title":"Senior Software Engineer, Forward Deployed (U.S. Public Sector)","description":"<p><strong>About Invisible</strong></p>\n<p>Invisible Technologies makes AI work. Our end-to-end AI platform structures messy data, automates digital workflows, deploys agentic solutions, measures outcomes, and integrates human expertise where it matters most.</p>\n<p>Our platform cleans, labels, and structures company data so it is ready for AI. It adapts models to each business and adds human expertise when needed, the same approach we have used to improve models for more than 80% of the world’s top AI companies, including Microsoft, AWS, and Cohere.</p>\n<p>Our successes span industries, from supply chain automation for Swiss Gear to AI-enabled naval simulations with SAIC, and validating NBA draft picks for the Charlotte Hornets.</p>\n<p>Profitable for more than half a decade, Invisible reached $134M in revenue and ranked as the number two fastest growing AI company on the 2024 Inc. 5000. 
In September 2025, we raised $100M in growth capital to accelerate our mission of making AI actually work in the enterprise and to advance our platform technology.</p>\n<p><strong>About The Role</strong></p>\n<p>As a Senior Forward Deployed Engineer (FDE) for our U.S. Public Sector team at Invisible, you&#39;ll lead high-impact, AI-powered solutions that reshape how our clients operate their most critical workflows. You won’t just build and deploy — you’ll drive the strategy, architecture, and execution of end-to-end systems, working directly with client stakeholders and our internal delivery teams.</p>\n<p>This is a hybrid role: equal parts AI architect, hands-on engineer, and technical advisor. You’ll work on the front lines with ambitious clients, turning operational challenges into scalable AI workflows. You’ll be trusted to lead complex engagements, make architectural calls, and mentor others across technical and non-technical domains.</p>\n<p><strong>What You’ll Do</strong></p>\n<ul>\n<li>Scope, design, and lead implementation of AI-driven solutions in partnership with delivery teams and executive stakeholders</li>\n<li>Translate ambiguous workflows and business needs into repeatable systems and production-ready technical architectures</li>\n<li>Lead architecture design and trade-off discussions across performance, scalability, cost, and reliability</li>\n<li>Build usable systems from messy data and incomplete or evolving requirements</li>\n<li>Apply AI/ML solutions in highly regulated environments (e.g., defense, intelligence, healthcare, finance)</li>\n<li>Own projects end-to-end—from initial discovery and scoping through implementation, deployment, and post-launch iteration</li>\n<li>Build shared infrastructure, reusable components, and internal playbooks to improve delivery consistency and team velocity</li>\n<li>Mentor mid-level engineers and contribute to the development of forward-deployed AI engineering practices at 
Invisible</li>\n</ul>\n<p><strong>What We Need</strong></p>\n<ul>\n<li>Active U.S. Department of Defense Secret clearance or higher</li>\n<li>5+ years of software engineering experience, including work on data-intensive, ML, or backend systems</li>\n<li>Ability to work on-site 2–3 days per week at offices located in the greater Washington, D.C. and Reston, VA area</li>\n<li>Python &amp; ML/LLM frameworks: Hands-on experience with Python and modern ML/LLM tooling (e.g., Hugging Face, LangChain, OpenAI, Pinecone)</li>\n<li>Deployment &amp; infrastructure: Experience building and operating API-based services using Docker, FastAPI, Kubernetes, and major cloud platforms (AWS, GCP)</li>\n<li>Platform &amp; data management: Familiarity with workflow orchestration, pub/sub systems (e.g., Kafka), schema governance, data contracts, Unity Catalog, Delta/ETL pipelines, and replay processes</li>\n<li>Experience leading requirements-gathering activities and translating stakeholder input into technical specifications</li>\n</ul>\n<p><strong>What’s In It For You</strong></p>\n<p>Invisible is committed to fair and competitive pay, ensuring that compensation reflects both market conditions and the value each team member brings. Our salary structure accounts for regional differences in cost of living while maintaining internal equity.</p>\n<p>For this position, the annual salary ranges by location are:</p>\n<p>Tier 2 Salary Range: $164,000 – $240,000 USD</p>\n<p>You can find more information about our geographic pay tiers here. During the interview process, your Invisible Talent Acquisition Partner will confirm which tier applies to your location. For candidates outside the U.S., compensation is adjusted to reflect local market conditions and cost of living.</p>\n<p>Bonuses and equity are included in offers above entry level. 
Final compensation is determined by a combination of factors, including location, job-related experience, skills, knowledge, internal pay equity, and overall market conditions. Because of this, every offer is unique. Additional details on total compensation and benefits will be discussed during the hiring process.</p>\n<p><strong>What It&#39;s Like to Work at Invisible:</strong></p>\n<p>At Invisible, we’re not just redefining work—we’re reinventing it. We operate at the intersection of advanced AI and human ingenuity, pushing the boundaries of what’s possible to unlock productivity and scale. Ownership is at the core of everything we do. Here, you won’t just execute tasks—you’ll build, innovate, and shape the future alongside world-class clients pushing the boundaries of AI.</p>\n<p>We expect bold ideas, relentless drive, and the ability to turn ambiguity into opportunity. The pace is fast, the challenges are big, and the growth is unmatched. We’re not for everyone, and we’re okay with that. If you’re looking for predictable routines, this isn’t the place for you. But if you’re driven to create, thrive in dynamic environments, and want a front-row seat to the AI revolution, you’ll fit right in.</p>\n<p><strong>Country Hiring Guidelines:</strong> Invisible is a hybrid organization with offices and team members located around the world. While some roles may offer remote flexibility, most positions involve in-office collaboration and are tied to specific locations. 
Any location-based requirements will be clearly outlined in the job description.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6215398a-2c4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Invisible Technologies","sameAs":"https://www.invisible.co/join-us/","logo":"https://logos.yubhub.co/invisible.co.png"},"x-apply-url":"https://job-boards.eu.greenhouse.io/invisibletech/jobs/4741723101","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$164,000 – $240,000 USD","x-skills-required":["Python","ML/LLM frameworks","Docker","FastAPI","Kubernetes","AWS","GCP","workflow orchestration","pub/sub systems","schema governance","data contracts","Unity Catalog","Delta/ETL pipelines","replay processes"],"x-skills-preferred":["Hugging Face","LangChain","OpenAI","Pinecone"],"datePosted":"2026-03-06T12:12:41.818Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington DC–Baltimore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, ML/LLM frameworks, Docker, FastAPI, Kubernetes, AWS, GCP, workflow orchestration, pub/sub systems, schema governance, data contracts, Unity Catalog, Delta/ETL pipelines, replay processes, Hugging Face, LangChain, OpenAI, Pinecone","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":164000,"maxValue":240000,"unitText":"YEAR"}}}]}