One Data Pipeline to Rule Them All

There are myriad data storage systems available for every use case imaginable, but letting application teams choose storage engines independently can lead to duplicated effort and reinvented wheels. This talk will explore how to build a reusable data pipeline based on Kafka to support multiple applications, datasets, and use cases, including archival, warehousing and analytics, stream and batch processing, and low-latency "hot" storage.

Presented by

Sam Kitajima-Kimbrel

Sam Kitajima-Kimbrel is a software engineer with many feels about distributed systems, data routing and storage, and usable APIs. He currently leads Twilio's Data Platform team, building scalable and reusable data infrastructure to support a 400-person R&D organization. Sam has a different hair color every month, enjoys cycling and cooking, and resides in the San Francisco Bay Area with his husband Kameron and their dogs Basil and Mochi.