Category Archives: Data
Safeguarding app health and consumer experience with metric-aware rollouts
We’ve traditionally relied on A/B testing at DoorDash to guide our decisions. As part of our ongoing efforts to enhance product development while safeguarding app health and the consumer experience, we are introducing metric-aware rollouts for experiments.
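To make the idea concrete, here is a minimal sketch of what a metric-aware rollout gate might look like. The metric names, thresholds, and the `safe_to_advance` helper are illustrative assumptions, not DoorDash's actual implementation.

```python
# Hypothetical sketch of a metric-aware rollout gate. Deltas are relative
# treatment-vs-control changes, normalized so negative always means worse.

GUARDRAILS = {
    "order_conversion_rate": 0.005,  # tolerate at most a 0.5% regression
    "app_health_score": 0.01,        # tolerate at most a 1% regression
}

def safe_to_advance(metric_deltas: dict[str, float]) -> bool:
    """Return True only if no guardrail metric regressed past its threshold."""
    for metric, max_regression in GUARDRAILS.items():
        if metric_deltas.get(metric, 0.0) < -max_regression:
            return False
    return True

# Conversion dipped 0.2% (tolerable) but app health dropped 3%: halt the rollout.
print(safe_to_advance({"order_conversion_rate": -0.002, "app_health_score": -0.03}))  # False
```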
Sharpening the Blur: Removing dilution to maximize experiment power
When it comes to reducing variance in experiments, the spotlight often falls on sophisticated methods like CUPED (Controlled Experiments Using Pre-Experiment Data).
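For readers unfamiliar with CUPED, the core adjustment is small enough to sketch: regress the in-experiment metric on a pre-experiment covariate and subtract the predictable part. The simulation below is illustrative, not the post's actual pipeline.

```python
import numpy as np

def cuped_adjust(y: np.ndarray, x_pre: np.ndarray) -> np.ndarray:
    """CUPED: remove the part of the in-experiment metric y that is
    predictable from the pre-experiment covariate x_pre. This shrinks
    variance without biasing the estimated treatment effect."""
    theta = np.cov(y, x_pre)[0, 1] / np.var(x_pre)
    return y - theta * (x_pre - x_pre.mean())

# Simulated example: y is strongly correlated with its pre-period value.
rng = np.random.default_rng(0)
x_pre = rng.normal(10, 2, 10_000)
y = x_pre + rng.normal(0, 1, 10_000)
y_adj = cuped_adjust(y, x_pre)
print(np.var(y), np.var(y_adj))  # adjusted variance is far smaller
```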
API-First Approach to Kafka Topic Creation
DoorDash’s engineering teams revamped Kafka topic creation by replacing a Terraform/Atlantis-based approach with an in-house API, Infra Service.
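Infra Service's interface isn't shown in this excerpt, but the kind of admin call an API-first service would wrap looks like the following sketch using the open-source kafka-python client; the broker address and topic settings are placeholder values.

```python
from kafka.admin import KafkaAdminClient, NewTopic

# Connect to the cluster's admin endpoint (placeholder address).
admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

# Create a topic programmatically, the operation an API-first service
# exposes instead of requiring a Terraform plan/apply cycle.
admin.create_topics([
    NewTopic(name="order-events", num_partitions=6, replication_factor=3)
])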
Transforming MLOps at DoorDash with Machine Learning Workbench
It is amusing for a human being to write an article about artificial intelligence at a time when AI systems, powered by machine learning (ML), are generating their own blog posts.
Leveraging Flink to Detect User Sessions and Engage DoorDash Consumers with Real-Time Notifications
At DoorDash, we value every chance to boost order conversions in the app.
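The grouping that Flink's event-time session windows compute continuously over a stream can be shown in a few lines of plain Python: a new session starts whenever the gap since a user's previous event exceeds an inactivity threshold. The 30-minute gap below is an assumed tuning choice, not a value from the post.

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)  # assumed inactivity gap

def sessionize(events: list[datetime]) -> list[list[datetime]]:
    """Group a user's time-ordered events into sessions: start a new
    session whenever the gap since the previous event exceeds SESSION_GAP."""
    sessions, current = [], []
    for ts in events:
        if current and ts - current[-1] > SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(ts)
    if current:
        sessions.append(current)
    return sessions

t0 = datetime(2023, 1, 1, 12, 0)
events = [t0, t0 + timedelta(minutes=5), t0 + timedelta(hours=2)]
print(len(sessionize(events)))  # 2 sessions: the 2-hour gap closes the first
```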
How DoorDash Standardized and Improved Microservices Caching
As DoorDash’s microservices architecture has grown, so too has the volume of interservice traffic.
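The post describes DoorDash's standardized caching in their own stack; below is a language-neutral Python sketch of the multi-layer read-through pattern such a library typically implements, with an in-process dictionary as the local layer and Redis as the assumed shared layer.

```python
import redis  # assumed: a shared Redis cluster as the second cache layer

local_cache: dict[str, str] = {}                          # layer 1: per-instance, fastest
redis_client = redis.Redis(host="localhost", port=6379)   # layer 2: shared across instances

def get_with_fallback(key: str, load_from_source) -> str:
    """Multi-layer read-through: local cache -> Redis -> source of truth,
    populating the upper layers on the way back."""
    if key in local_cache:
        return local_cache[key]
    cached = redis_client.get(key)
    if cached is not None:
        value = cached.decode()
    else:
        value = load_from_source(key)         # e.g. a database or gRPC call
        redis_client.set(key, value, ex=300)  # short TTL bounds staleness
    local_cache[key] = value
    return value
```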
Addressing the Challenges of Sample Ratio Mismatch in A/B Testing
Experimentation isn’t just a cornerstone of innovation and sound decision-making; it’s often called the gold standard for problem-solving, thanks in part to its roots in the scientific method.
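The standard check for sample ratio mismatch is a chi-square goodness-of-fit test on assignment counts; a sketch follows. The strict alpha of 0.001 is a common convention for SRM detection, and the counts are made up for illustration.

```python
from scipy.stats import chisquare

def check_srm(control_n: int, treatment_n: int,
              expected_split=(0.5, 0.5), alpha=0.001):
    """Flag a sample ratio mismatch: test observed assignment counts against
    the intended split. A very small p-value means the randomization itself
    is suspect, so the experiment's results shouldn't be trusted."""
    total = control_n + treatment_n
    expected = [total * p for p in expected_split]
    stat, p_value = chisquare([control_n, treatment_n], f_exp=expected)
    return p_value < alpha, p_value

# 10,000 vs. 10,500 users under an intended 50/50 split.
print(check_srm(10_000, 10_500))  # (True, ~0.0005) — unlikely to be chance
```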
Using Metrics Layer to Standardize and Scale Experimentation at DoorDash
Metrics are vital for measuring success in any data-driven company, but ensuring that these metrics are consistently and accurately measured across the organization can be challenging.
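One way a metrics layer achieves that consistency is by defining each metric once, centrally, so every experiment analysis reuses the same expression. The schema below is a hypothetical illustration, not DoorDash's actual metric spec.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    """Hypothetical metrics-layer entry: defined once, reused everywhere,
    instead of each team re-deriving the metric in ad hoc SQL."""
    name: str
    description: str
    numerator_sql: str
    denominator_sql: str | None = None
    aggregation: str = "mean"

order_conversion = MetricDefinition(
    name="order_conversion_rate",
    description="Orders placed per app visit",
    numerator_sql="COUNT(DISTINCT order_id)",
    denominator_sql="COUNT(DISTINCT visit_id)",
)
```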
Using CockroachDB to Reduce Feature Store Costs by 75%
While building a feature store to handle the massive growth of our machine-learning (“ML”) platform, we learned that using a mix of different databases can yield significant gains in efficiency and operational simplicity.