{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/rightsizing"},"x-facet":{"type":"skill","slug":"rightsizing","display":"Rightsizing","count":2},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d34ee930-f5c"},"title":"Cloud Platform Engineer","description":"<p>We are seeking a Cloud Platform Engineer to join our team. You will be responsible for designing and implementing cloud-native database infrastructure, using Terraform/Ansible to provision managed DB instances across multiple clouds (RDS, Azure Database, Cloud SQL) as well as self-managed clusters.</p>\n<p>You will also be responsible for automating configuration management, security hardening, and patching of database instances across all environments, automating workflows to reduce manual effort and improve reliability.</p>\n<p>In addition, you will develop internal tools and scripts (Python/Bash) to enable production support teams to manage their own database instances and environments safely. 
Develop scripts for routine operational tasks such as backups and health checks.</p>\n<p>You will integrate advanced observability platforms (Dynatrace, CloudWatch) with AIOps tools to establish SLOs and train models for anomaly detection and proactive forecasting of database degradation (such as predicting slow queries or imminent connection pool exhaustion).</p>\n<p>You will design, deploy, and govern AI-powered agents (using Azure Copilot/AWS Bedrock) to achieve autonomous self-healing capabilities and automated resource management.</p>\n<p>You will implement advanced monitoring (CloudWatch, Dynatrace) for key database metrics (SLIs/SLOs) such as latency, throughput, error rates, and connection pools. Develop and train predictive ML models to analyze historical telemetry and forecast potential system outages or performance bottlenecks, and configure proactive monitoring and alerting for critical services.</p>\n<p>You will respond to alerts and create self-healing actions driven by them.</p>\n<p>You will design and implement cross-region/multi-AZ replication, automated failover strategies, and point-in-time recovery (PITR) procedures for mission-critical databases, including disaster recovery planning and DR drills.</p>\n<p>You will execute backup strategies and validate recovery procedures using Rubrik, and perform restores as needed.</p>\n<p>You will work closely with application operations/production support teams to troubleshoot issues at the database layer (performance, locks, schema) and the platform layer (multi-cloud, middleware, network, resource limits) to find root causes.</p>\n<p>You will lead incident response and root cause analysis (RCA) for database outages, performance degradations, and data integrity issues. 
Collaborate with DBAs and application teams for root cause analysis.</p>\n<p>You will implement AI tools to perform real-time root cause analysis, correlate complex event data (logs, metrics), and auto-generate runbooks.</p>\n<p>You will define and automate scaling strategies (read replicas, sharding, auto-scaling) based on predicted load and business growth. Provide input for capacity planning and resource optimization.</p>\n<p>You will implement cost management policies, including rightsizing instances, managing storage tiers, and defining lifecycle rules for backups and snapshots.</p>\n<p>You will proactively analyze query performance, index usage, and database configuration, making and automating changes to optimize throughput and reduce latency. Support DBA teams in performance tuning initiatives.</p>\n<p>You will implement robust secrets management solutions (AWS Secrets Manager, HashiCorp Vault) for database credentials, ensuring applications retrieve secrets securely at runtime.</p>\n<p>You will define and enforce least-privilege access policies (IAM roles, service accounts) for databases.</p>\n<p>You will implement encryption and data masking policies as directed.</p>\n<p>You will manage security and compliance by utilizing AI agents to detect configuration drift and auto-generate compliant updates for IAM, network, and security policies.</p>\n<p>You will apply patches and perform upgrades in coordination with DBA teams. 
Validate post-upgrade functionality and compliance.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d34ee930-f5c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://www.capgemini.com/us-en/about-us/who-we-are/","logo":"https://logos.yubhub.co/capgemini.com.png"},"x-apply-url":"https://jobs.workable.com/view/aNTGp9AN6h4GPQ6Vrak2GZ/hybrid-cloud-platform-engineer-in-pune-at-capgemini","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Oracle","DB2","MSSQL","Snowflake","PostgreSQL","MySQL","Terraform","Ansible","Python","Bash","Dynatrace","CloudWatch","Azure Copilot","AWS Bedrock","Rubrik","AI/ML","Cloud Native","Database Administration","Configuration Management","Security Hardening","Patching","Observability Platforms","AIOps Tools","Autonomous Self-Healing","Resource Management","Advanced Monitoring","Predictive ML Models","Proactive Monitoring","Alerting","Cross-Region/Multi-AZ Replication","Automated Failover Strategies","Point-in-Time Recovery","Disaster Recovery Planning","DR Drills","Backup Strategies","Recovery Procedures","Application Operations","Production Support Teams","Root Cause Analysis","Incident Response","AI Tools","Runbooks","Scaling Strategies","Capacity Planning","Resource Optimization","Cost Management Policies","Rightsizing Instances","Storage Tiers","Lifecycle Rules","Query Performance","Index Usage","Database Configuration","Secrets Management Solutions","Least-Privilege Access Policies","Encryption","Data Masking Policies","Security Compliance","Configuration Drift","Compliant 
Updates"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:17:12.465Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Oracle, DB2, MSSQL, Snowflake, PostgreSQL, MySQL, Terraform, Ansible, Python, Bash, Dynatrace, CloudWatch, Azure Copilot, AWS Bedrock, Rubrik, AI/ML, Cloud Native, Database Administration, Configuration Management, Security Hardening, Patching, Observability Platforms, AIOps Tools, Autonomous Self-Healing, Resource Management, Advanced Monitoring, Predictive ML Models, Proactive Monitoring, Alerting, Cross-Region/Multi-AZ Replication, Automated Failover Strategies, Point-in-Time Recovery, Disaster Recovery Planning, DR Drills, Backup Strategies, Recovery Procedures, Application Operations, Production Support Teams, Root Cause Analysis, Incident Response, AI Tools, Runbooks, Scaling Strategies, Capacity Planning, Resource Optimization, Cost Management Policies, Rightsizing Instances, Storage Tiers, Lifecycle Rules, Query Performance, Index Usage, Database Configuration, Secrets Management Solutions, Least-Privilege Access Policies, Encryption, Data Masking Policies, Security Compliance, Configuration Drift, Compliant Updates"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8516ca2f-5df"},"title":"Data Science Engineer, Capacity & Efficiency","description":"<p><strong>About the Role</strong></p>\n<p>As a member of the Compute team, you will play a critical role in Anthropic&#39;s mission of building safe and beneficial AI by ensuring we understand, optimize, and strategically manage our cloud infrastructure spend. 
Your work will directly impact how efficiently we operate our multi-cloud and datacenter footprint, from forecasting infrastructure needs and planning capacity, to driving utilization improvements and reducing unit costs across our compute, storage, and networking resources.</p>\n<p>You will work closely with Compute Finance, Infrastructure Engineers, and Product to translate raw cloud billing data into actionable efficiency insights and influence capacity planning &amp; allocation. You will help build deep visibility into our infrastructure spend, forecast capacity needs, attribute costs accurately across teams and workloads, model resource demand curves, and help identify efficiency opportunities across our fleet.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Build and maintain cloud cost attribution models that accurately allocate infrastructure spend (compute, accelerators, storage, networking, data transfer) across teams, products, and workloads, providing clear visibility into who is spending what and why.</li>\n<li>Build and maintain cost of revenue pipelines and models.</li>\n<li>Partner with infrastructure, finance, and procurement stakeholders to analyse utilization patterns, identify inefficiencies, and drive optimization initiatives that improve the cost-effectiveness of our non-accelerator cloud resources.</li>\n<li>Develop forecasting models for non-accelerator infrastructure demand, incorporating business growth projections, product roadmaps, and historical spend trends to enable proactive capacity planning and budget accuracy.</li>\n<li>Define and track unit cost metrics (e.g., cost per request, cost per GB stored, cost per pipeline run) and identify opportunities to reduce them, influencing infrastructure and engineering roadmaps with data-driven recommendations.</li>\n<li>Develop unit cost economics for various workloads and applications, and use the metrics to drive 
efficiency efforts across product and infrastructure teams.</li>\n</ul>\n<p><strong>You might be a good fit if you have:</strong></p>\n<ul>\n<li>6+ years of experience in data science, analytics, or FinOps roles, with a focus on cloud infrastructure cost analysis, capacity planning, or efficiency optimisation.</li>\n<li>Experience building spend forecasting models and large-scale cost attribution systems.</li>\n<li>Deep knowledge of cloud billing systems, cost allocation methodologies, and spend optimisation levers (e.g., reserved instances, committed use discounts, rightsizing, spot/preemptible usage).</li>\n<li>A passion for the company&#39;s mission of building helpful, honest, and harmless AI.</li>\n<li>Expertise in Python, SQL, forecasting, data modelling, and data visualisation tools.</li>\n<li>A bias for action and urgency, not letting perfect be the enemy of the effective.</li>\n<li>A strong disposition to thrive in ambiguity, taking initiative to create clarity and forward progress.</li>\n<li>A deep curiosity and energy for pulling the thread on hard questions.</li>\n<li>Experience in turning open questions and data into concise and insightful analysis.</li>\n<li>Highly effective written communication and presentation skills.</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. 
However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. 
In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic.</p>","url":"https://yubhub.co/jobs/job_8516ca2f-5df","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5125881008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$275,000 - $370,000 USD","x-skills-required":["cloud infrastructure cost analysis","capacity planning","efficiency optimisation","Python","SQL","forecasting","data modelling","data visualisation"],"x-skills-preferred":["reserved instances","committed use discounts","rightsizing","spot/preemptible usage"],"datePosted":"2026-03-08T13:59:33.909Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City, NY; San Francisco, CA; Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud infrastructure cost analysis, capacity planning, efficiency optimisation, Python, SQL, forecasting, data modelling, data visualisation, reserved instances, committed use discounts, rightsizing, spot/preemptible usage","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":275000,"maxValue":370000,"unitText":"YEAR"}}}]}