
Building a Gigascale ML Feature Store with Redis, Binary Serialization, String Hashing, and Compression

When a company with millions of consumers, such as DoorDash, builds machine learning (ML) models, its feature data can grow to billions of records, with millions actively retrieved during model inference under low-latency constraints.
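
The sketch below illustrates the general pattern the title names, not the post's exact implementation: feature values kept in Redis hashes keyed by entity ID, feature-name strings shortened with xxHash32, values binary-serialized with MessagePack, and larger payloads compressed with zlib. The entity ID, feature names, and the compression threshold are illustrative assumptions.

```python
# Illustrative sketch: Redis hashes + hashed feature names + binary
# serialization + compression. Names, schema, and thresholds are assumptions,
# not DoorDash's actual feature store schema.
import zlib

import msgpack   # pip install msgpack
import redis     # pip install redis
import xxhash    # pip install xxhash

COMPRESS_THRESHOLD = 64  # only compress payloads larger than this many bytes

r = redis.Redis(host="localhost", port=6379)


def _field(feature_name: str) -> bytes:
    # Hash the long, repetitive feature-name string down to 4 bytes to cut
    # per-field memory overhead inside the Redis hash.
    return xxhash.xxh32(feature_name.encode("utf-8")).digest()


def _encode(value) -> bytes:
    # Binary-serialize the value; prefix one flag byte marking compression.
    raw = msgpack.packb(value)
    if len(raw) > COMPRESS_THRESHOLD:
        return b"\x01" + zlib.compress(raw)
    return b"\x00" + raw


def _decode(blob: bytes):
    raw = zlib.decompress(blob[1:]) if blob[:1] == b"\x01" else blob[1:]
    return msgpack.unpackb(raw)


def write_features(entity_id: str, features: dict) -> None:
    # One Redis hash per entity; one hashed field per feature.
    r.hset(entity_id, mapping={_field(k): _encode(v) for k, v in features.items()})


def read_feature(entity_id: str, feature_name: str):
    blob = r.hget(entity_id, _field(feature_name))
    return None if blob is None else _decode(blob)


# Hypothetical entity and feature names, for illustration only.
write_features("consumer:123", {"avg_order_value_30d": 42.5,
                                "store_embedding": [0.1] * 128})
print(read_feature("consumer:123", "avg_order_value_30d"))
```

Storing one hash per entity keeps related features co-located for a single round trip, while hashing field names and compressing large values trades a little CPU for a large reduction in memory footprint at this scale.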

Improving Online Experiment Capacity by 4X with Parallelization and Increased Sensitivity

Data-driven companies measure real customer reactions to gauge the efficacy of new product features, but being unable to run these experiments simultaneously on mutually exclusive groups significantly slows development.
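
As a minimal sketch of the mutually-exclusive-groups idea (not the post's experimentation platform): user IDs are hashed deterministically into fixed buckets, and each concurrent experiment owns a disjoint slice of those buckets, so experiments can run in parallel without sharing users. The experiment names and bucket count below are assumptions.

```python
# Illustrative sketch of parallel experiments on mutually exclusive groups via
# deterministic hashing. Experiment names and bucket counts are hypothetical.
import hashlib
from typing import Optional

NUM_BUCKETS = 1000

# Disjoint bucket ranges => any user belongs to at most one experiment.
EXPERIMENTS = {
    "new_search_ranker": range(0, 500),
    "checkout_redesign": range(500, 1000),
}


def bucket(user_id: str, salt: str = "exposure-v1") -> int:
    # Deterministic, roughly uniform bucketing: the same user always lands
    # in the same bucket for a given salt.
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % NUM_BUCKETS


def assign(user_id: str) -> Optional[str]:
    b = bucket(user_id)
    for experiment, buckets in EXPERIMENTS.items():
        if b in buckets:
            return experiment
    return None


print(assign("consumer:123"))  # stable assignment across repeated calls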

Integrating a Search Ranking Model into a Prediction Service

As companies use data to improve their user experiences and operations, it becomes increasingly important that the infrastructure supporting the creation and maintenance of machine learning models is scalable and enables high productivity.