{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/streaming-pipelines"},"x-facet":{"type":"skill","slug":"streaming-pipelines","display":"Streaming Pipelines","count":5},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_460d00aa-b48"},"title":"Senior / Staff+ Software Engineer, Voice Platform","description":"<p>About the role</p>\n<p>We&#39;re building the infrastructure that lets people talk to Claude,real-time, bidirectional voice conversations that feel natural, responsive, and safe. This is foundational work for how millions of people will interact with AI.</p>\n<p>The Voice Platform team designs and operates the serving systems, streaming pipelines, and APIs that bring Anthropic&#39;s audio models from research into production across Claude.ai, our mobile apps, and the Anthropic API. You&#39;ll work at the intersection of real-time media, low-latency inference, and distributed systems,building infrastructure where every millisecond of latency is felt by the user.</p>\n<p>We partner closely with the Audio research team, who train the speech understanding and generation models, and with product teams shipping voice experiences to users. 
Your job is to make those models fast, reliable, and delightful to talk to at scale.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Design and build the real-time streaming infrastructure that powers voice conversations with Claude: ingesting microphone audio, orchestrating model inference, and streaming synthesized speech back with minimal latency</li>\n</ul>\n<ul>\n<li>Build low-latency serving systems for speech models, optimizing time-to-first-audio and end-to-end conversational responsiveness</li>\n</ul>\n<ul>\n<li>Develop the public and internal APIs that expose voice capabilities to Claude.ai, mobile clients, and third-party developers</li>\n</ul>\n<ul>\n<li>Own the audio transport layer (codecs, jitter buffers, adaptive bitrate, packet loss recovery) so conversations stay smooth across unreliable networks</li>\n</ul>\n<ul>\n<li>Build observability and quality-measurement systems for voice: latency distributions, audio quality metrics, interruption handling, and turn-taking accuracy</li>\n</ul>\n<ul>\n<li>Partner with Audio research to move new model architectures from experiment to production, and feed real-world performance data back into research</li>\n</ul>\n<ul>\n<li>Collaborate with mobile and product engineering on client-side audio capture, playback, and the end-to-end user experience</li>\n</ul>\n<p>You may be a good fit if you</p>\n<ul>\n<li>Have 6+ years of experience building distributed systems, real-time infrastructure, or platform services at scale</li>\n</ul>\n<ul>\n<li>Have shipped production systems where latency is measured in tens of milliseconds and users notice when you miss</li>\n</ul>\n<ul>\n<li>Are comfortable working across the stack, from transport protocols and serving infrastructure up to the APIs product teams build on</li>\n</ul>\n<ul>\n<li>Are results-oriented, with a bias toward flexibility and impact</li>\n</ul>\n<ul>\n<li>Pick up slack, even if it goes outside your job description</li>\n</ul>\n<ul>\n<li>Enjoy pair programming (we love 
to pair!)</li>\n</ul>\n<ul>\n<li>Care about the societal impacts of voice AI and want to help shape how these systems are developed responsibly</li>\n</ul>\n<ul>\n<li>Are comfortable with ambiguity: voice is a fast-moving space, and you&#39;ll help define the architecture as we learn what works</li>\n</ul>\n<p>Strong candidates may also have experience with</p>\n<ul>\n<li>Real-time media protocols and stacks: WebRTC, RTP, gRPC bidirectional streaming, or WebSockets at scale</li>\n</ul>\n<ul>\n<li>Audio engineering fundamentals: codecs (Opus, AAC), voice activity detection, echo cancellation, jitter buffering, or audio DSP</li>\n</ul>\n<ul>\n<li>Low-latency ML inference serving, streaming model outputs, or GPU-based serving infrastructure</li>\n</ul>\n<ul>\n<li>Telephony, live streaming, video conferencing, or voice assistant platforms</li>\n</ul>\n<ul>\n<li>Mobile audio pipelines on iOS (AVAudioEngine, AudioUnits) or Android (Oboe, AAudio)</li>\n</ul>\n<ul>\n<li>Working alongside ML researchers to productionize models; speech experience is a plus but not required</li>\n</ul>\n<p>Representative projects</p>\n<ul>\n<li>Driving time-to-first-audio below human perceptual thresholds by co-designing the serving pipeline with the Audio research team</li>\n</ul>\n<ul>\n<li>Building a streaming inference orchestrator that interleaves speech recognition, LLM reasoning, and speech synthesis with overlapping execution</li>\n</ul>\n<ul>\n<li>Designing the voice mode API surface for the Anthropic API so developers can build their own voice agents on Claude</li>\n</ul>\n<ul>\n<li>Implementing graceful barge-in and interruption handling so users can cut Claude off mid-sentence naturally</li>\n</ul>\n<ul>\n<li>Instrumenting end-to-end audio quality metrics and building dashboards that catch regressions before users do</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_460d00aa-b48","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5172245008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$485,000 USD","x-skills-required":["Real-time media protocols and stacks","Audio engineering fundamentals","Low-latency ML inference serving","Distributed systems","Streaming pipelines","APIs"],"x-skills-preferred":["WebRTC","RTP","gRPC bidirectional streaming","WebSockets","Opus","AAC","Voice activity detection","Echo cancellation","Jitter buffering","Audio DSP","GPU-based serving infrastructure","Telephony","Live streaming","Video conferencing","Voice assistant platforms","Mobile audio pipelines on iOS","Android","Working alongside ML researchers"],"datePosted":"2026-04-18T15:59:54.712Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Real-time media protocols and stacks, Audio engineering fundamentals, Low-latency ML inference serving, Distributed systems, Streaming pipelines, APIs, WebRTC, RTP, gRPC bidirectional streaming, WebSockets, Opus, AAC, Voice activity detection, Echo cancellation, Jitter buffering, Audio DSP, GPU-based serving infrastructure, Telephony, Live streaming, Video conferencing, Voice assistant platforms, Mobile audio pipelines on iOS, Android, Working alongside ML 
researchers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9be280f4-cbc"},"title":"Software Engineer, Data Infrastructure","description":"<p>We&#39;re looking for an engineer to join our small, high-impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.</p>\n<p>As a software engineer on our data infrastructure team, you&#39;ll design, build, and operate scalable, fault-tolerant infrastructure for LLM Research: distributed compute, data orchestration, and storage across modalities. You&#39;ll develop high-throughput systems for data ingestion, processing, and transformation , including training data catalogs, deduplication, quality checks, and search. You&#39;ll also build systems for traceability, reproducibility, and robust quality control at every stage of the data lifecycle.</p>\n<p>You&#39;ll collaborate with research teams to unlock new features, improve data quality, and accelerate training cycles. 
You&#39;ll implement and maintain monitoring and alerting to support platform reliability and performance.</p>\n<p>If you&#39;re excited by distributed systems, large-scale data mining, open-source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we&#39;d love to hear from you.</p>","url":"https://yubhub.co/jobs/job_9be280f4-cbc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines Lab","sameAs":"https://thinkingmachines.ai/","logo":"https://logos.yubhub.co/thinkingmachines.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5013919008","x-work-arrangement":"onsite","x-experience-level":"entry|mid|senior","x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD","x-skills-required":["backend language (Python or Rust)","distributed compute frameworks (Apache Spark or Ray)","cloud infrastructure","data lake architectures","batch and streaming pipelines"],"x-skills-preferred":["Kafka","dbt","Terraform","Airflow","web crawler","deduplication","data mining","search","file formats and storage systems"],"datePosted":"2026-04-18T15:54:00.309Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend language (Python or Rust), distributed compute frameworks (Apache Spark or Ray), cloud infrastructure, data lake architectures, batch and streaming pipelines, Kafka, dbt, Terraform, Airflow, web crawler, deduplication, data mining, search, file formats and storage 
systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_760c3e88-e35"},"title":"Senior Product Manager, Data","description":"<p>Job Title: Senior Product Manager, Data</p>\n<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>\n<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Own and evangelize Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>\n<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>\n<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>\n<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>\n<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>\n<li>Contribute to data governance and quality initiatives, focusing on 
data consistency, lineage tracking, and compliance with security standards</li>\n<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>\n<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>\n<li>Familiarity with data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>\n<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>\n<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>\n<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>\n<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>\n<li>Awareness of data security, compliance, and governance best practices</li>\n<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. 
We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>\n<p>Salary Range: $143,000 to $210,000</p>\n<p>Benefits:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Workplace:</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. 
Teams also gather quarterly to support collaboration.</p>","url":"https://yubhub.co/jobs/job_760c3e88-e35","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4649824006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$143,000 to $210,000","x-skills-required":["data product management","data architecture","enterprise data engineering","data lakes","data warehouses","ETL/ELT and streaming pipelines","data governance frameworks","modern data stack technologies","Snowflake","BigQuery","Databricks","Apache Spark","Airflow","DBT","Kafka","data modeling","domain-driven design","scalable data platforms","BI and analytics platforms","Tableau","Looker","Power BI"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:58.405Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA/San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power 
BI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":143000,"maxValue":210000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_60aae9e8-e8b"},"title":"Software Engineer, Observability","description":"<p>We&#39;re looking for a skilled Software Engineer to join our Observability team. As a member of this team, you will be responsible for designing and evolving logging, metrics, and tracing pipelines to handle massive data volumes. You will also evaluate and integrate new technologies to enhance Airtable&#39;s observability posture.</p>\n<p>Your responsibilities will include guiding and mentoring a growing team of infrastructure engineers, defining and upholding coding standards, partnering with other teams to embed observability throughout the development lifecycle, and owning end-to-end reliability for observability tools.</p>\n<p>You will also extend observability to LLM and AI features by instrumenting prompts, model calls, and RAG pipelines to capture latency, reliability, cost, and safety signals. You will design online and offline evaluation loops for LLM quality, build dashboards and alerts for token usage, error rates, and model performance, and connect these signals to tracing for prompt lineage.</p>\n<p>To succeed in this role, you will need 6+ years of software engineering experience, with 3+ years focused on observability or infrastructure at scale. 
You will also need demonstrated success implementing and running production-grade logging, metrics, or tracing systems, proficiency in distributed systems concepts, data streaming pipelines, and container orchestration, and deep hands-on knowledge of tools such as Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, or ClickHouse.</p>\n<p>This is a high-impact role that will allow you to lead the modernization of Airtable&#39;s observability stack, influence how every engineer monitors and debugs mission-critical systems, and drive major projects across the engineering organization to build platforms and services that solve observability problems.</p>","url":"https://yubhub.co/jobs/job_60aae9e8-e8b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airtable","sameAs":"https://airtable.com/","logo":"https://logos.yubhub.co/airtable.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airtable/jobs/8400374002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Distributed systems concepts","Data streaming pipelines","Container orchestration","Prometheus","Grafana","Datadog","OpenTelemetry","ELK Stack","Loki","ClickHouse"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:22.779Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY; Remote (Seattle, WA only)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Distributed systems concepts, Data streaming pipelines, Container orchestration, Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, 
ClickHouse"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e2392ba0-1bc"},"title":"Staff Engineer AI Agents","description":"<p>About Zuma</p>\n<p>Zuma is pioneering the future of agentic AI in property management. We build AI agents that act as property managers, handling the full spectrum of interactions with both prospects and current residents on behalf of our clients.</p>\n<p>Our agents don’t just assist human workflows; they own them end-to-end, operating across leasing, collections and resident communications. Zuma has ambitions to continue expanding into adjacent work activities in tangential areas of property management.</p>\n<p>This is a rare chance to shape the future of how an entire industry operates , not in theory, but in production, at scale, touching real customers and physical assets every day. At Zuma, human and AI agents work side by side, and you&#39;ll help define what that collaboration looks like at its best.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own E2E projects that cross all areas of software development including full stack web apps, agentic AI solutions across multiple work activities, extensive integrations with PMS and CRM systems, infrastructure, and internal tooling.</li>\n</ul>\n<ul>\n<li>Architect, build, and deploy production AI agents using modern agent frameworks, owning the full lifecycle from design to reliability in production.</li>\n</ul>\n<ul>\n<li>Define the technical patterns and standards for how software is built across the engineering org , you will be setting the playbook others follow.</li>\n</ul>\n<ul>\n<li>Strengthen our core systems , including our onboarding/configuration system, integration frameworks, and AI performance analytics infrastructure.</li>\n</ul>\n<ul>\n<li>Collaborate directly with the VPE and product leadership to translate product vision into delivery, making high-stakes technical trade-offs with 
confidence.</li>\n</ul>\n<ul>\n<li>Own system reliability, observability, and continuous improvement, defining how we measure, monitor, and iterate on our agents and web products in production.</li>\n</ul>\n<ul>\n<li>Work across the stack (backend services, LLM orchestration, integrations, data pipelines, frontends) to ship agents and products that are robust and scalable.</li>\n</ul>\n<ul>\n<li>Tame legacy code and lay down new foundations; the patterns and architecture you create will be inherited by the engineers who come after you.</li>\n</ul>\n<ul>\n<li>Be a close partner to the product and operations teams, turning their domain needs into intelligent automated workflows without requiring domain expertise upfront.</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Prior experience at a startup or high-growth company; comfort shipping fast and iterating in production.</li>\n</ul>\n<ul>\n<li>AWS experience with IaC (Terraform) and comfort working with infrastructure / dev ops.</li>\n</ul>\n<ul>\n<li>Background in building self-serve platforms or integration infrastructure.</li>\n</ul>\n<ul>\n<li>Experience with workflow automation platforms or business process orchestration.</li>\n</ul>\n<ul>\n<li>Experience with telephony integrations (Twilio or similar) and building voice-capable agents or chatbots across text and voice channels.</li>\n</ul>\n<ul>\n<li>Familiarity with speech-to-text, text-to-speech, or real-time audio streaming pipelines in production AI systems.</li>\n</ul>\n<ul>\n<li>Classical ML experience: supervised/unsupervised learning, feature engineering, model training and evaluation outside of LLM contexts.</li>\n</ul>\n<p><strong>Our Stack</strong></p>\n<ul>\n<li>Python, TypeScript/Node.js</li>\n</ul>\n<ul>\n<li>OpenAI, Anthropic</li>\n</ul>\n<ul>\n<li>LangGraph, OpenAI Agents SDK, custom orchestration layers</li>\n</ul>\n<ul>\n<li>AWS, AWS ECS, PostgreSQL, Redis</li>\n</ul>\n<ul>\n<li>RealPage, Entrata, Yardi, and other property 
management systems</li>\n</ul>","url":"https://yubhub.co/jobs/job_e2392ba0-1bc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Zuma","sameAs":"https://www.zuma.com/","logo":"https://logos.yubhub.co/zuma.com.png"},"x-apply-url":"https://jobs.lever.co/getzuma/16961f6d-ab02-469d-8f99-3a68bf5a5026","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180-220 per year","x-skills-required":["Python","TypeScript","OpenAI","Anthropic","LangGraph","OpenAI Agents SDK","AWS","AWS ECS","PostgreSQL","Redis","RealPage","Entrata","Yardi"],"x-skills-preferred":["AWS IaC (Terraform)","Infrastructure / Dev Ops","Self-serve platforms","Integration infrastructure","Workflow automation platforms","Business process orchestration","Telephony integrations (Twilio)","Voice-capable agents or chatbots","Speech-to-text","Text-to-speech","Real-time audio streaming pipelines","Classical ML"],"datePosted":"2026-04-17T13:12:33.765Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco Bay Area"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript, OpenAI, Anthropic, LangGraph, OpenAI Agents SDK, AWS, AWS ECS, PostgreSQL, Redis, RealPage, Entrata, Yardi, AWS IaC (Terraform), Infrastructure / Dev Ops, Self-serve platforms, Integration infrastructure, Workflow automation platforms, Business process orchestration, Telephony integrations (Twilio), Voice-capable agents or chatbots, Speech-to-text, Text-to-speech, Real-time audio streaming pipelines, Classical ML"}]}