{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/retool"},"x-facet":{"type":"skill","slug":"retool","display":"Retool","count":8},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4aeaf45c-6c9"},"title":"Automation and Retool Engineering Lead","description":"<p>At bunq, we&#39;re not just building a banking app; we&#39;re reshaping how people around the world experience financial freedom. As our Automation and Retool Engineering Lead, you will be the force multiplier for our entire operation. 
Your mission is to empower our internal teams with world-class tooling and automation, enabling them to provide instant and effective help to our users and multiply their own productivity.</p>\n<p>As the Automation and Retool Engineering Lead, you will own the health, strategy, and velocity of our internal tooling and automation ecosystem.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead, mentor, and grow a high-performing team of Retool and automation engineers, fostering a culture of quality, speed, and continuous improvement.</li>\n<li>Own the technical roadmap and architectural vision for our internal tools stack (including Retool, n8n, and other low-code platforms), ensuring it remains scalable, secure, and bug-free.</li>\n<li>Constantly challenge the status quo by identifying operational bottlenecks and driving the development of new automations and tools that have a measurable impact on business efficiency and user satisfaction.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Proven experience in people leadership, with a strong track record of hiring, mentoring, and developing engineering teams to achieve high performance.</li>\n<li>A solid technical foundation in frontend development, JavaScript, and/or low-code engineering, with hands-on experience using tools like Retool and n8n.</li>\n<li>Exceptional project management skills, with the ability to translate high-level business goals into a prioritized technical roadmap and hold your team accountable for delivery.</li>\n<li>A strategic mindset with the ability to assess complex technical ecosystems, identify risks, and make critical decisions about tooling and architecture.</li>\n<li>Excellent communication skills in English, with the ability to collaborate effectively with operational teams, product leaders, and other stakeholders to get it done.</li>\n</ul>\n<p>Your space to perform: we give you the space and the tools you need to succeed.</p>\n<p>Benefits:</p>\n<ul>\n<li>Great, international colleagues who 
share your mindset.</li>\n<li>Hybrid setup: after 3 months in-office, work 2 days remote, 3 days in-office weekly.</li>\n<li>We support growth with bunq Academy and €1500 annual learning budget.</li>\n<li>A massive discount with Urban Sports Club for your wellbeing (in the Netherlands).</li>\n<li>A Multisport gym card for your health and wellbeing (in Turkey).</li>\n<li>Flex Benefits: €70 monthly budget via Re: benefit, offering access to 150+ perks tailored to your lifestyle.</li>\n<li>Travel expenses are covered whether you come walking or by bike, bus or car (though we prefer green choices ) (In Turkey and Netherlands).</li>\n<li>Digital Nomad Program: After your first year, enjoy up to 20 days per year to work while traveling, combining flexibility with strong team collaboration.</li>\n<li>We reward tenure with a dedicated travel budget: €1.5k after 2 years and €3k after 4 years to visit another core office.</li>\n<li>A MacBook so you can Get Shit Done with us.</li>\n<li>Delicious lunches from our fabulous in-house chefs with vegan and vegetarian options.</li>\n<li>An optional pension plan with monthly contribution from bunq (in Netherlands).</li>\n<li>Private health insurance, just in case (in Turkey or Bulgaria).</li>\n<li>Monthly contribution to your phone and internet bills (in Turkey and Netherlands).</li>\n<li>Friday drinks and other celebrations - bunq style.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4aeaf45c-6c9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"bunq","sameAs":"https://careers.bunq.com","logo":"https://logos.yubhub.co/careers.bunq.com.png"},"x-apply-url":"https://careers.bunq.com/o/automation-and-retool-engineering-lead","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["people leadership","frontend 
development","JavaScript","low-code engineering","project management","communication skills"],"x-skills-preferred":["Retool","n8n"],"datePosted":"2026-04-19T13:28:52.213Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"people leadership, frontend development, JavaScript, low-code engineering, project management, communication skills, Retool, n8n"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1fa6d45d-1b7"},"title":"Senior Software Engineer, United Kingdom","description":"<p>We are hiring Software Engineers to accelerate our mission. At KoBold, software engineers have the unique opportunity to embed directly with their users and learn the ins and outs of mineral exploration and geology while developing state-of-the-art technology solutions.\\n\\nUnlike traditional software engineering roles, we don&#39;t simply ship code and passively wait for feedback about its utility: our userbase includes our colleagues... and ourselves!\\n\\nWhile there are real technical challenges in making mineral exploration data broadly searchable and accessible to both humans and machines, we believe that solving these technical challenges cannot be done without &quot;getting our hands dirty&quot; – sometimes literally! 
– by embedding directly with the exploration teams and even occasionally (~once a year) joining our colleagues in the field, be it in Zambia, Canada, or Arizona, to experience the impact of our software in real time.\\n\\nAs a Software Engineer on the Data Systems Engineering team at KoBold, your main role will be to enable systematic exploration and materially improve exploration success rates by making mineral exploration data broadly accessible to humans and machines.\\n\\nPast projects have included SIP (the Structured Ingest Pipeline), DataKit generation (producing curated sets of data on demand), and RAG (Retrieval-Augmented Generation, utilizing natural language processing on unstructured data).\\n\\nOur tech stack is primarily Python and includes Django, React, AWS, and additional technologies like Retool and Prefect.\\n\\nYour work will empower KoBold to unlock invaluable insights and streamline intricate scientific processes.\\n\\nCollaborating with our exceptional team of data scientists, geologists, and other software engineers, you will have the opportunity to tackle complex problems head-on and collectively pave the way for the discoveries of vital energy transition metals like lithium, copper, nickel, and cobalt.\\n\\nTogether we can shape the future of mineral exploration and contribute to building a sustainable world.\\n\\nThis role will be responsible for:\\n\\nDeep engagement with exploration geologists and data scientists, continual learning about mineral exploration, and tailoring technology development to the needs of exploration project scientists\\n\\nBuilding data pipelines and tooling for deriving advanced human and machine insights from exploration data, often leading a small group of software engineers to successful delivery\\n\\nDeveloping expertise in KoBold&#39;s Data Systems and deeply understanding how they impact exploration\\n\\nEnd-to-end ownership of projects from design to implementation and testing to continued engagement 
with colleagues on exploration teams using your solutions\\n\\nResponding well to design and code feedback, also providing feedback to teammates\\n\\nOperationally managing the team&#39;s services and assisting scientific colleagues with our tooling\\n\\nQualifications:\\n\\n4+ years of software engineering experience, ideally building production cloud data systems\\n\\nProficiency with Python\\n\\nAbility to write production-quality code that is correct, readable, well-tested, scalable and extensible\\n\\nSkilled in large-scale system design\\n\\nA track record of taking ownership from definition of the problem and delivering projects with demonstrated impact in an iterative manner\\n\\nIntellectual curiosity and eagerness to learn about all aspects of mineral exploration, particularly in the geology domain.\\n\\nEnjoys constantly learning such that you are driving insights through using our tools in exploration and willing to work directly with geologists in the field.\\n\\nAbility to explain technical problems to and collaborate on solutions with domain experts who are not software developers.\\n\\nA strong communicator who enjoys working with colleagues across the company.\\n\\nExcitement about joining a fast-growing early-stage company, comfort with a dynamic work environment, and eagerness to take on an evolving range of responsibilities.\\n\\nKeen not just to build cool technology, but to figure out what technical product to build to best achieve the business objectives of the company.\\n\\nNice to Haves:\\n\\nExperience with modern frontend frameworks such as React\\n\\nExperience with geospatial data and building map-based experiences\\n\\nFamiliarity with containerization and container orchestration platforms, such as Docker, AWS ECS, Kubernetes, etc.\\n\\nFormal education or job exposure to natural sciences</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1fa6d45d-1b7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"KoBold","sameAs":"https://www.kobold.com/","logo":"https://logos.yubhub.co/kobold.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/koboldmetals/jobs/4678367005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$120,000 - $210,000 USD","x-skills-required":["Python","Django","React","AWS","Retool","Prefect","Geospatial data","Containerization","Container orchestration"],"x-skills-preferred":["Modern frontend frameworks","Geospatial data and map-based experiences","Containerization and container orchestration platforms"],"datePosted":"2026-04-18T15:55:22.022Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, United Kingdom"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Django, React, AWS, Retool, Prefect, Geospatial data, Containerization, Container orchestration, Modern frontend frameworks, Geospatial data and map-based experiences, Containerization and container orchestration platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":120000,"maxValue":210000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5061631-dc9"},"title":"Backend Engineer, Reporting Systems - Contract 6mo","description":"<p>We&#39;re seeking a skilled Backend Engineer to support our Accounting and Reporting Systems. 
This contract role is essential to building APIs, managing crypto asset data, and delivering actionable insights for our asset operations team.</p>\n<p>As a Backend Engineer, you&#39;ll focus on data extraction from various crypto platforms, data normalization, and optimizing data accessibility for portfolio management, smart contract vesting, and counterparty exposure.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and maintain APIs to extract crypto asset data from counterparties and block explorers.</li>\n<li>Ensure high reliability, performance, and security across all data pipelines.</li>\n<li>Utilize no-code tools like Retool to build internal dashboards and data interfaces.</li>\n</ul>\n<p>Data Management and Normalization:</p>\n<ul>\n<li>Normalize raw data to ensure accuracy and consistency across systems.</li>\n<li>Implement scalable data storage and retrieval solutions.</li>\n</ul>\n<p>Query and Reporting Optimization:</p>\n<ul>\n<li>Write and optimize complex SQL and NoSQL queries to support robust reporting.</li>\n<li>Ensure data is easily queryable for portfolio insights and operations analysis.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with teams across asset operations, finance, and investments to understand data needs.</li>\n<li>Build dashboards and reports that provide visibility into smart contract vesting schedules, counterparty exposures, and portfolio positions.</li>\n</ul>\n<p>Documentation and Compliance:</p>\n<ul>\n<li>Maintain clear documentation for APIs, data models, and reporting tools.</li>\n<li>Ensure compliance with data protection and processing standards.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, or a related field.</li>\n<li>3–5 years of experience in backend engineering roles. Strong expertise in SQL and relational databases.</li>\n<li>Proven experience designing and managing APIs. 
Familiarity with blockchain technologies, smart contracts, and decentralized finance.</li>\n<li>Ability to build backend-powered data visualizations and reporting interfaces.</li>\n<li>Resourceful and solutions-oriented; comfortable in fast-paced, ambiguous environments.</li>\n<li>Passion for cryptocurrency and blockchain technology.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with Google Cloud (BigTable) or AWS.</li>\n<li>Prior work in the crypto/blockchain industry.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d5061631-dc9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Polychain Capital","sameAs":"https://www.polychain.com/","logo":"https://logos.yubhub.co/polychain.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/polychaincapital/jobs/6885321","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"contract","x-salary-range":"$150/hour (dependent on experience)","x-skills-required":["API design and development","Blockchain technologies","Smart contracts","Decentralized finance","SQL and relational databases","No-code tools like Retool","Data visualization and reporting interfaces"],"x-skills-preferred":["Google Cloud (BigTable)","AWS","Prior work in the crypto/blockchain industry"],"datePosted":"2026-04-17T12:52:54.740Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - San Francisco"}},"jobLocationType":"TELECOMMUTE","employmentType":"CONTRACTOR","occupationalCategory":"Engineering","industry":"Finance","skills":"API design and development, Blockchain technologies, Smart contracts, Decentralized finance, SQL and relational databases, No-code tools like Retool, Data visualization and reporting interfaces, Google Cloud (BigTable), AWS, Prior work in the crypto/blockchain 
industry"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_886118d3-6a1"},"title":"Senior Data Engineer - Data Engineering","description":"<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>\n<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>\n<p>The main goal of the DE team in 2024-25 is to build robust golden data sets to power our business goals of creating more insights-based products. Making data-driven decisions is key to Plaid&#39;s culture. To support that, we need to scale our data systems while maintaining correct and complete data.</p>\n<p>Data Engineers heavily leverage SQL and Python to build data workflows. 
We use tools like DBT, Airflow, Redshift, ElasticSearch, Atlanta, and Retool to orchestrate data pipelines and define workflows.</p>\n<p>We work with engineers, product managers, business intelligence, data analysts, and many other teams to build Plaid&#39;s data strategy and a data-first mindset.</p>\n<p>Our engineering culture is IC-driven -- we favor bottom-up ideation and empowerment of our incredibly talented team.</p>\n<p>We are looking for engineers who are motivated by creating impact for our consumers and customers, growing together as a team, shipping the MVP, and leaving things better than we found them.</p>\n<p>You will be in a high-impact role that will directly enable business leaders to make faster and more informed business judgments based on the datasets you build.</p>\n<p>You will have the opportunity to carve out the ownership and scope of internal datasets and visualizations across Plaid which is a currently unowned area that we intend to take over and build SLAs on.</p>\n<p>You will have the opportunity to learn best practices and up-level your technical skills from our strong DE team and from the broader Data Platform team.</p>\n<p>You will collaborate with and have strong and cross-functional partnerships with literally all teams at Plaid from Engineering to Product to Marketing/Finance etc.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Understanding different aspects of the Plaid product and strategy to inform golden dataset choices, design and data usage principles.</li>\n</ul>\n<ul>\n<li>Have data quality and performance top of mind while designing datasets</li>\n</ul>\n<ul>\n<li>Leading key data engineering projects that drive collaboration across the company.</li>\n</ul>\n<ul>\n<li>Advocating for adopting industry tools and practices at the right time</li>\n</ul>\n<ul>\n<li>Owning core SQL and python data pipelines that power our data lake and data warehouse.</li>\n</ul>\n<ul>\n<li>Well-documented data with defined dataset 
quality, uptime, and usefulness.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>4+ years of dedicated data engineering experience, solving complex data pipeline issues at scale.</li>\n</ul>\n<ul>\n<li>You have experience building data models and data pipelines on top of large datasets (on the order of 500TB to petabytes)</li>\n</ul>\n<ul>\n<li>You value SQL as a flexible and extensible tool, and are comfortable with modern SQL data orchestration tools like DBT, Mode, and Airflow.</li>\n</ul>\n<ul>\n<li>You have experience working with different performant warehouses and data lakes: Redshift, Snowflake, Databricks.</li>\n</ul>\n<ul>\n<li>You have experience building and maintaining batch and real-time pipelines using technologies like Spark, Kafka.</li>\n</ul>\n<ul>\n<li>You appreciate the importance of schema design, and can evolve an analytics schema on top of unstructured data.</li>\n</ul>\n<ul>\n<li>You are excited to try out new technologies. You like to produce proof-of-concepts that balance technical advancement and user experience and adoption.</li>\n</ul>\n<ul>\n<li>You like to get deep in the weeds to manage, deploy, and improve low-level data infrastructure.</li>\n</ul>\n<ul>\n<li>You are empathetic working with stakeholders. You listen to them, ask the right questions, and collaboratively come up with the best solutions for their needs while balancing infra and business needs.</li>\n</ul>\n<ul>\n<li>You are a champion for data privacy and integrity, and always act in the best interest of consumers.</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>\n<p>We recognize that strong qualifications can come from both prior work experiences and lived experiences. 
We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description.</p>\n<p>We are always looking for team members that will bring something unique to Plaid!</p>\n<p>Plaid is proud to be an equal opportunity employer and values diversity at our company. We do not discriminate based on race, color, national origin, ethnicity, religion or religious belief, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, military or veteran status, disability, or other applicable legally protected characteristics.</p>\n<p>We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local laws.</p>\n<p>Plaid is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance with your application or interviews due to a disability, please let us know at accommodations@plaid.com</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_886118d3-6a1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Plaid","sameAs":"https://plaid.com/","logo":"https://logos.yubhub.co/plaid.com.png"},"x-apply-url":"https://jobs.lever.co/plaid/022278b3-0943-44b3-a54b-1de421017589","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,800-$286,800 per year","x-skills-required":["SQL","Python","DBT","Airflow","Redshift","ElasticSearch","Atlanta","Retool","Spark","Kafka"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:52:06.845Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"SQL, 
Python, DBT, Airflow, Redshift, ElasticSearch, Atlanta, Retool, Spark, Kafka","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190800,"maxValue":286800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b1d4c773-5c5"},"title":"Analytics Engineer, Finance","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional 
growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Finance Data team is embedded within the CFO Org and is responsible for building internal data products that scale analytics across business teams and drive efficiencies in our daily operations. This team provides technical guidance on high-impact, scalable projects across Finance, and is the subject-matter expert in financial and transactional data that supports our Finance day-to-day operations.</p>\n<p><strong>About the Role</strong></p>\n<p>As an Analytics Engineer, you will be setting the foundation to scale analytics across our business functions and impart best data practices for a rapidly growing organization. We aspire to build the Finance team of the future.</p>\n<p>In addition, you will work collaboratively with key stakeholders in Finance and other business teams to understand their pain points and take the lead in proposing viable, future-proof solutions to resolve them. You will also autonomously lead your own projects that deliver business impact and help cultivate a mature data culture among Finance teams.</p>\n<p>We are looking for a seasoned engineer who has a proven track record of owning the entire data stack at high transaction volume companies, managing business critical ETL pipelines consumed by non-technical teams. As a generalist “fixer”, you may be deployed across several different Finance domains (e.g. Tax datamart, ERP migration, Procurement automation). For this role we need someone who excels in dynamic environments, adapts quickly to changing needs, and confidently navigates ambiguous or evolving requirements. 
If you&#39;re energized by solving technical problems without a playbook and comfortable wearing multiple hats, this role is for you! To clarify, you will <strong>not</strong> be responsible for training ML models, nor would we describe this role as ‘product analytics’.</p>\n<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Understand the data needs of Finance teams, including Revenue, Tax, Procurement, Compute &amp; Infrastructure Accounting, Strategic Finance, and translate that scope into technical requirements</li>\n</ul>\n<ul>\n<li>Facilitate the development of data products and tools for stakeholders to self-serve, enabling analytics to scale across the company</li>\n</ul>\n<ul>\n<li>Lead dimensional design - define, own, and maintain business-facing data marts</li>\n</ul>\n<ul>\n<li>Be a cross-functional champion at upholding high data integrity standards and SLAs for the timely delivery of data</li>\n</ul>\n<ul>\n<li>Build and maintain insightful and reliable dashboards to track both operational and financial metrics for the Executive team</li>\n</ul>\n<ul>\n<li>Contribute to the future roadmap of the Finance team from a data systems perspective</li>\n</ul>\n<ul>\n<li>Grow to be an expert in Finance Data and OpenAI’s data architecture</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>7+ years of experience as an Analytics Engineer or in a similar role (Data Analyst or Data Engineer) with a proven track record in shipping canonical datasets</li>\n</ul>\n<ul>\n<li>Empathy towards non-developer stakeholders and their day-to-day pain points</li>\n</ul>\n<ul>\n<li>Strong proficiency in SQL for data transformation, comfort in at least one functional/OOP language such as Python or R</li>\n</ul>\n<ul>\n<li>Familiarity with managing distributed 
data stores (e.g. S3, Trino, Hive, Spark), and experience building multi-step ETL jobs coupled with orchestrating workflows (e.g. Airflow, Dagster)</li>\n</ul>\n<ul>\n<li>Experience in writing unit tests to validate data products and version control (e.g. GitHub, Stash)</li>\n</ul>\n<ul>\n<li>Expert at creating compelling data visualizations with dashboarding tools (e.g. Tableau, Looker or similar)</li>\n</ul>\n<ul>\n<li>Excellent communication skills and ability to present data-driven narratives in both verbal and written form to a non-technical audience</li>\n</ul>\n<ul>\n<li>Experience solving ambiguous problem statements in an early stage environment</li>\n</ul>\n<p><strong>You could be an especially great fit if you have:</strong></p>\n<ul>\n<li>Prior experience leading the development of an internal production tool, serving hundreds of cross-functional customers such as Billing Operations, Deal Desk or Go-to-Market teams</li>\n</ul>\n<ul>\n<li>Some frontend experience with React, TypeScript, Retool, Streamlit, or building web apps</li>\n</ul>\n<ul>\n<li>Good understanding of Spark and ability to write, debug, and optimize Spark jobs</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>\n<p>For additional information, please see [OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement](https://cdn.openai.com/policies/eeo-policy-statement.pdf).</p>\n<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. 
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b1d4c773-5c5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/7cd50a19-65f2-4a52-89a2-512130e58c5c","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$198K – $260K • Offers Equity","x-skills-required":["SQL","Python","R","S3","Trino","Hive","Spark","Airflow","Dagster","GitHub","Stash","Tableau","Looker"],"x-skills-preferred":["React","TypeScript","ReTool","Streamlit","Web development"],"datePosted":"2026-03-08T22:16:37.388Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, R, S3, Trino, Hive, Spark, Airflow, Dagster, GitHub, Stash, Tableau, Looker, React, TypeScript, ReTool, Streamlit, Web development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3eca7e36-aa0"},"title":"Support Systems Architect","description":"<p><strong>Support Systems Architect</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment 
Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$216K – $240K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be 
provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The User Operations team is central to ensuring that our customers&#39; experience with our products is nothing short of exceptional. We resolve complex issues, provide technical guidance, and support customers in maximizing value and adoption from deploying our products. We work closely with Sales, Technical Success, Product, Engineering and others to deliver the best possible experience to our customers at scale. OpenAI&#39;s customers represent a range of diverse backgrounds and maturity, from early-stage startups to established global enterprises. Given OpenAI’s already breakneck shipping cadence – and the expectation that it will only accelerate – our ability to architect scalable systems for support readiness, user feedback, and broader program delivery is central to building world-class products and maintaining exceptional support quality.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking a systems‑minded builder who will design, prototype, implement, and iterate on the tooling, data flows, and processes that allow the User Operations team to redefine a modern support organization. Think: automated launch checklists, content and knowledge pipelines, incident detection and evaluators, and other processes that power a User Operations team operating at an unprecedented scale. 
You’d be building resilient systems, not better slide decks.</p>\n<p>We’re looking for people who thrive at the intersection of project management, systems building, data science/data engineering/software engineering, team enablement, and customer advocacy – and enjoy working cross-functionally in a fast-paced, evolving environment.</p>\n<p>This role is based in San Francisco, California. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role you will:</strong></p>\n<ul>\n<li>Build “Day-1 enabled” workflows: role-tailored playbooks, content auto-diffs from source docs, and other workflows that have been taken for granted in typical Support organizations.</li>\n</ul>\n<ul>\n<li>Continuously automate repetitive touchpoints with scripts, agents, and LLM-powered flows; implement governance, observability, evaluation gates, and safe rollback.</li>\n</ul>\n<ul>\n<li>Codify detection (windowing, dedupe, thresholds), on-call handoffs, and post-incident learning loops that protect customer experience and SLAs.</li>\n</ul>\n<ul>\n<li>Prototype and learn quickly—leveraging ChatGPT, Jupyter notebooks, Retool, and other tools—to prove value before hardening with Engineering.</li>\n</ul>\n<ul>\n<li>Stand up data pipelines that capture sentiment, ticket trends, and BPO insights, routing actionable signals back to Product within hours—not weeks.</li>\n</ul>\n<ul>\n<li>Identify risks and challenges during tooling rollouts, proposing solutions that safeguard customer experience and service levels.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have 8+ years of experience in building tools for internal teams, especially within a customer support environment.</li>\n</ul>\n<ul>\n<li>Have shipped or maintained tools and automations 
(dashboards, ETL pipelines, low-code apps) that eliminated manual work and scaled beyond a single team.</li>\n</ul>\n<ul>\n<li>Treat ChatGPT &amp; LLMs as default co-developers, rapidly turning natural-language ideas into working code or queries.</li>\n</ul>\n<ul>\n<li>Deeply enjoy working cross-functionally and are skilled at building relationships with Product, Engineering, and Operations teams.</li>\n</ul>\n<ul>\n<li>Are passionate about customer advocacy and have experience translating customer feedback into strategic product insights.</li>\n</ul>\n<ul>\n<li>Possess a strong bias for automation and a distaste for doing low-complexity or otherwise repetitive work.</li>\n</ul>\n<ul>\n<li>Thrive in a fast-moving, ambiguous environment where priorities will shift quickly and iterating on your systems will be required.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose technologies like artificial intelligence benefit all of humanity.</p>","url":"https://yubhub.co/jobs/job_3eca7e36-aa0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/782db0f6-a03d-4f9f-9eb2-191831b37939","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216K – $240K • Offers Equity","x-skills-required":["ChatGPT","LLMs","Jupyter notebooks","Retool","project management","systems building","data science","data engineering","software engineering","team enablement","customer advocacy"],"x-skills-preferred":["automated launch checklists","content and knowledge pipelines","incident detection and evaluators","data pipelines","sentiment analysis","ticket 
trends","BPO insights"],"datePosted":"2026-03-06T18:36:31.397Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ChatGPT, LLMs, Jupyter notebooks, Retool, project management, systems building, data science, data engineering, software engineering, team enablement, customer advocacy, automated launch checklists, content and knowledge pipelines, incident detection and evaluators, data pipelines, sentiment analysis, ticket trends, BPO insights","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2916efa5-82d"},"title":"Data Scientist, Support","description":"<p><strong>Data Scientist, Support</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $255K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p><strong>About the Team</strong></p>\n<p>The User Operations team is central to ensuring that our customers&#39; experience with our products is nothing short of exceptional. We resolve complex issues, provide technical guidance, and support customers in maximizing value and adoption from deploying our products. 
We work closely with Sales, Technical Success, Product, Engineering and others to deliver the best possible experience to our customers at scale. OpenAI&#39;s customers represent a range of diverse backgrounds and maturity, from early-stage startups to established global enterprises. Given OpenAI’s breakneck shipping cadence and growth—and the expectation that it will only accelerate—transforming our rich support data into real‑time insights and scalable, self‑serve analytics is critical to sustaining exceptional customer experiences on the path to AGI.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re seeking a Support Data Scientist who will dig deep into user‑support data—surfacing trends, volumes, and friction signals—and turn these findings into actionable insights and always‑on reporting. You’ll design, build, and maintain self‑serve dashboards that keep every stakeholder informed in real time, partnering closely with Data Science and Engineering to ensure clean pipelines, robust models, and scalable tooling. Think proactive friction detection and real‑time service‑health views that help us stay ahead of demand—delivering decision‑grade insights, not just prettier slide decks.</p>\n<p>This role is based in San Francisco, California. 
We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Explore large support and product datasets to uncover trends, volume drivers, and user‑experience pain points, distilling findings into clear, actionable narratives.</li>\n</ul>\n<ul>\n<li>Build, enhance, and maintain self‑serve dashboards and reporting tools, enabling non‑technical teams to answer their own data questions.</li>\n</ul>\n<ul>\n<li>Establish a unified metrics taxonomy for service‑health and performance—and build automated data‑sharing pipelines and scorecards with our BPO partners to ensure everyone operates from the same real‑time view of success.</li>\n</ul>\n<ul>\n<li>Leverage LLMs to build bespoke classifiers that automatically label and segment inbound volumes—powering smarter routing, richer self‑serve insights, and swifter root‑cause analysis.</li>\n</ul>\n<ul>\n<li>Partner with Data Engineering to ensure reliable pipelines, implement data‑quality checks, and document sources of truth.</li>\n</ul>\n<ul>\n<li>Jump into high‑priority special projects to conduct bespoke deep‑dive analyses and deliver clear, strategic recommendations to leadership.</li>\n</ul>\n<ul>\n<li>Prototype quickly—leveraging ChatGPT, Jupyter notebooks, Retool, and other tools—to prove value before hardening with Engineering.</li>\n</ul>\n<ul>\n<li>Collaborate with Data Science on predictive models and experimentation, translating results into operational recommendations.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have 8+ years in analytics, business intelligence, or data science (experience with customer support or operations teams is a plus).</li>\n</ul>\n<ul>\n<li>Have expert‑level SQL skills and proficiency in Python or R for advanced analysis and automation.</li>\n</ul>\n<ul>\n<li>Have hands‑on experience designing and maintaining BI dashboards (e.g. 
Looker, Mode, Tableau, Sundial) with a focus on clarity and self‑serve usability.</li>\n</ul>\n<ul>\n<li>Hands‑on experience fine‑tuning or prompt‑engineering LLMs to build text classifiers, sentiment analysis, or tagging systems.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to translate complex datasets into clear business stories and recommendations for both technical and non‑technical audiences.</li>\n</ul>\n<ul>\n<li>Familiarity with support metrics (SLAs, FCR, deflection) and ability to define service health KPIs.</li>\n</ul>\n<ul>\n<li>Strong cross‑functional communication skills—comfortable collaborating daily with engineers, data scientists, and operations leaders.</li>\n</ul>\n<ul>\n<li>An eye for detail, a zero‑defect mindset, and a bias to</li>\n</ul>","url":"https://yubhub.co/jobs/job_2916efa5-82d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/1f37ae5b-791a-4505-9575-183cc4bb9d5e","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230K – $255K","x-skills-required":["SQL","Python","R","BI dashboards","LLMs","Data Science","Data Engineering","Predictive models","Experimentation"],"x-skills-preferred":["ChatGPT","Jupyter notebooks","Retool","Looker","Mode","Tableau","Sundial"],"datePosted":"2026-03-06T18:32:13.138Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, R, BI dashboards, LLMs, Data Science, Data Engineering, Predictive models, Experimentation, ChatGPT, Jupyter notebooks, Retool, Looker, Mode, Tableau, 
Sundial","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":255000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_509e3a3b-0fb"},"title":"ASIC Physical Design, Sr Staff","description":"<p>Opening. This role is a key member of the Interface IP Design Methodology team, working with global teams to define best practice ASIC design standards and flows. The team is responsible for next-generation SerDes and Memory interface controllers, PHYs, and subsystems.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>Develop a complete front-to-back end design implementation methodology (RTL to GDSII) using Synopsys&#39; best in class tools and technologies.</p>\n<p>Work with leading edge designs and teams to drive the industry best PPA for IP designs.</p>\n<p>Evaluate and exercise various aspects of the development flow which may include design for test logic, synthesis, place &amp; route, timing and power (incl. 
EM/IR) optimization and analysis.</p>\n<p>Develop and maintain best in class digital design methodologies, including documentation, scripts, and training materials.</p>\n<p>Work as a liaison between EDAG tool and IP design teams.</p>\n<p>Continuously improve and refine design processes to enhance efficiency and performance.</p>\n<p><strong>What you need</strong></p>\n<p>BS or MS in EE with 10+ years of hands-on experience developing high-speed digital IP cores and/or SOCs.</p>\n<p>Knowledge of IP deliverables, ASIC implementation and physical design flow and tools, memories, logic libraries, and PDK versions.</p>\n<p>Direct hands-on experience with Fusion Compiler or industry equivalent Synthesis and Place &amp; Route tools.</p>\n<p>Ability to facilitate cross-functional collaboration, including fostering innovation, improving communication, and driving results.</p>\n<p>Good analysis, debugging, and problem-solving skills.</p>\n<p>Solid written and verbal communication skills and the ability to create clear and concise documentation and provide training.</p>\n<p>Familiarity with other Synopsys tools (Primetime, PrimePower, RLTA, CoreTools) is a plus.</p>\n<p>Working knowledge of high-speed interface protocols such as HDMI, MIPI, PCIe, SATA, Ethernet, USB, DP, and DDR is a plus.</p>","url":"https://yubhub.co/jobs/job_509e3a3b-0fb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/hyderabad/asic-physical-design-sr-staff/44408/91568840304","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["BS or MS in EE","10+ years of hands-on experience developing high-speed digital IP cores and/or 
SOCs","Knowledge of IP deliverables, ASIC implementation and physical design flow and tools, memories, logic libraries, and PDK versions","Direct hands-on experience with Fusion Compiler or industry equivalent Synthesis and Place & Route tools","Ability to facilitate cross-functional collaboration","Good analysis, debugging, and problem-solving skills","Solid written and verbal communication skills"],"x-skills-preferred":["Familiarity with other Synopsys tools (Primetime, PrimePower, RLTA, CoreTools)","Working knowledge of high-speed interface protocols such as HDMI, MIPI, PCIe, SATA, Ethernet, USB, DP, and DDR"],"datePosted":"2026-02-11T16:09:21.948Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"BS or MS in EE, 10+ years of hands-on experience developing high-speed digital IP cores and/or SOCs, Knowledge of IP deliverables, ASIC implementation and physical design flow and tools, memories, logic libraries, and PDK versions, Direct hands-on experience with Fusion Compiler or industry equivalent Synthesis and Place & Route tools, Ability to facilitate cross-functional collaboration, Good analysis, debugging, and problem-solving skills, Solid written and verbal communication skills, Familiarity with other Synopsys tools (Primetime, PrimePower, RLTA, CoreTools), Working knowledge of high-speed interface protocols such as HDMI, MIPI, PCIe, SATA, Ethernet, USB, DP, and DDR"}]}