Scaling Down: Running Large Sites Locally
At Eventbrite we have a moderately complex site: lots of different services, datastores, caches, and other moving parts. It's all managed by our wonderful Ops team in production, and scaling up there is a problem we're already working on, but what about scaling down? How do you get more than a hundred moving parts to run on hundreds of developer laptops with limited RAM, without sacrificing productivity and development speed?
We'll go through how we developed a custom development environment based around Docker containers and a Python-based tool called bay, which manages not only which containers to run but also how to interlink them, how changes propagate through the system, and how to keep up with all the changes streaming in from outside.
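To give a flavor of the kind of problem a tool like this solves, here is a minimal sketch in plain Python of ordering container startup from a service dependency graph. The service names and dependency map are hypothetical examples, not bay's actual configuration format or API; the sketch only illustrates the underlying idea of starting each service after the things it depends on.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical dependency map: each service lists the services it
# needs running before it can start. Not bay's real config format.
services = {
    "web": {"api", "cache"},
    "api": {"db", "cache"},
    "cache": set(),
    "db": set(),
}

def start_order(deps):
    """Return one valid startup order: every service appears after
    all of its dependencies (a topological sort of the graph)."""
    return list(TopologicalSorter(deps).static_order())

if __name__ == "__main__":
    # Prints an order such as db and cache first, then api, then web.
    print(start_order(services))
```

A real orchestrator layers much more on top of this, such as health checks, rebuilds when code changes, and tearing services down in reverse order, but the dependency graph is the core data structure that makes "what do I run, and in what order?" answerable.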
Andrew is a member of the Django core team and a Senior Engineer at Eventbrite. He was the original author of the Django migrations framework and its predecessor, South, and now writes and maintains the Channels framework. His work mostly focuses on software and systems architecture, and he enjoys nothing more than making systems and tools that other engineers can use to get their jobs done more easily. In his spare time, he enjoys piloting small planes, archery, and attempting to visit every one of the USA's National Parks.