News Feed Ranking -- Passing the Interview¶
~4 min read
Prerequisites: System Components | Problem Definition
The News Feed Ranking interview is one of the most common cases in FAANG ML System Design interviews (asked in ~40% of all MLSD rounds). In 45--60 minutes the candidate must go all the way from clarifying questions to trade-offs. The key differentiator is demonstrating multi-objective thinking (engagement + quality + diversity) and knowing the real-world constraints: 500K QPS, <200 ms total latency, position bias in the training data. Candidates who focus only on the model and ignore the systems aspects fail 70%+ of these interviews.
Interview Framework (45-60 min)¶
| Phase | Time | Focus |
|---|---|---|
| Clarifying questions | 0--5 min | Platform, scale, objectives |
| High-level design | 5--15 min | 3-stage pipeline |
| Deep dive | 15--30 min | Ranking model, features |
| Trade-offs & challenges | 30--45 min | Multi-objective, bias |
| Extensions | 45--60 min | Cold start, ads, safety |
Step 1: Clarifying Questions (5 min)¶
**Platform:**
- What type? (social, news, professional)
- What content formats? (text, images, video)
- Social graph or interest-based?
**Scale:**
- DAU?
- Posts per day?
- Average friend/follow count?
**Objectives:**
- Primary metric? (time spent, engagement)
- Quality constraints? (misinformation, harassment)
- Ad integration?
Step 2: High-Level Design (10 min)¶
Architecture¶
graph TD
subgraph PIPE["Feed Ranking Pipeline"]
CG["Candidate<br/>Generation"] --> RM["Ranking<br/>Models"]
RM --> BP["Blending<br/>& Policy"]
end
REQ["Feed Request"] --> PIPE
PIPE --> CS["Content Store"]
PIPE --> FS["Feature Store"]
PIPE --> UP["User Profile"]
style CG fill:#e8eaf6,stroke:#3f51b5
style RM fill:#e8eaf6,stroke:#3f51b5
style BP fill:#fff3e0,stroke:#ef6c00
style CS fill:#e8f5e9,stroke:#4caf50
style FS fill:#e8f5e9,stroke:#4caf50
style UP fill:#e8f5e9,stroke:#4caf50
Pipeline Stages¶
"Three main stages:
1. **Candidate Generation** (10K → 1000)
- Friends' posts (social graph)
- Followed pages/accounts
- Groups
- Recommendations (explore)
2. **Ranking** (1000 → 50)
- Multi-task model
- Predict: P(like), P(comment), P(share), watch_time
- Combine into single score
3. **Blending & Policy** (50 → final feed)
- Diversity enforcement
- Ad insertion
- Policy filtering (safety)
- Slot-based composition"
Step 3: Deep Dive (15 min)¶
Ranking Model¶
Multi-task learning for engagement prediction. Why multi-task: different engagement types carry different signals, a shared representation is more sample-efficient, and calibration is better per objective.
graph TD
UF["User Features"] --> UE["Embedding"]
PF["Post Features"] --> PE["Embedding"]
CF["Context Features"] --> CE["Embedding"]
UE --> SL["Shared Layers"]
PE --> SL
CE --> SL
SL --> T1["P(like)"]
SL --> T2["P(comment)"]
SL --> T3["P(share)"]
style SL fill:#e8eaf6,stroke:#3f51b5
style T1 fill:#e8f5e9,stroke:#4caf50
style T2 fill:#e8f5e9,stroke:#4caf50
style T3 fill:#e8f5e9,stroke:#4caf50
Final Score = w1 x P(like) + w2 x P(comment) + w3 x P(share) + ...
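The weighted combination above can be sketched in a few lines. The weight values below are purely illustrative (real systems tune them via online experiments), but the pattern of scarcer, higher-intent actions getting larger weights is the common one:

```python
def combine_scores(preds, weights):
    """Scalarize per-task predictions into one ranking score."""
    return sum(w * preds.get(task, 0.0) for task, w in weights.items())

# Illustrative weights, not production values: comments and shares are
# rarer and higher-intent than likes, so they weigh more.
weights = {"like": 1.0, "comment": 4.0, "share": 8.0}
preds = {"like": 0.10, "comment": 0.02, "share": 0.01}
score = combine_scores(preds, weights)   # 0.10 + 0.08 + 0.08 = 0.26
```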
Key Features¶
"Critical features for feed ranking..."
"1. **User-Creator Affinity**
- Interaction history (likes, comments)
- Social connection strength
- Time since last interaction
2. **Content Features**
- Post type (text, image, video)
- Text embeddings
- Image/video quality
- Content freshness
3. **Engagement Signals**
- Post's current engagement rate
- Viral velocity (growing fast?)
- Creator's avg engagement
4. **Context**
- Time of day
- User's session depth
- Device type
Cross features are key:
- User × Creator type affinity
- User × Content topic affinity
- User × Post format preference"
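A minimal sketch of building user × topic crosses, assuming a per-topic affinity vector from interaction history and a topic distribution from a content classifier (both hypothetical inputs):

```python
def topic_crosses(user_affinity, post_topics):
    """User x topic cross features: per-topic affinity scaled by how
    strongly the post expresses that topic."""
    return {
        f"user_x_{topic}": user_affinity.get(topic, 0.0) * strength
        for topic, strength in post_topics.items()
    }

user_affinity = {"sports": 0.9, "cooking": 0.1}    # from interaction history
post_topics = {"sports": 1.0, "politics": 0.5}     # from content classifier
crosses = topic_crosses(user_affinity, post_topics)
# the user has no politics affinity, so that cross is 0.0
```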
Diversity & Exploration¶
"Avoiding filter bubbles..."
"Problem:
- Only show content user engaged with before
- Creates echo chambers
- Reduces discovery
Solutions:
1. **MMR Re-ranking**
- Score = relevance - λ × max_similarity_to_selected
- Penalize similar items
2. **Slot-based Diversity**
- Reserve slots for different content types
- Slot 1-3: High relevance
- Slot 4: Different topic
- Slot 5: From less-engaged creator
3. **Exploration Budget**
- 5% of feed for exploration
- Random content from new creators
- Track engagement for learning"
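The MMR formula above (relevance minus λ × max similarity to already-selected items) is a greedy loop. A toy sketch, with a hypothetical topic-equality similarity standing in for embedding similarity:

```python
def mmr_rerank(candidates, relevance, similarity, lam=0.5, k=10):
    """Greedy MMR: each step picks the item maximizing
    relevance - lam * max similarity to already-selected items."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def mmr(c):
            max_sim = max((similarity(c, s) for s in selected), default=0.0)
            return relevance[c] - lam * max_sim
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy similarity: same-topic posts count as fully similar.
topics = {"a": "sports", "b": "sports", "c": "news"}
rel = {"a": 0.9, "b": 0.8, "c": 0.7}
sim = lambda x, y: 1.0 if topics[x] == topics[y] else 0.0
order = mmr_rerank(["a", "b", "c"], rel, sim, lam=0.5, k=3)
# "c" jumps ahead of "b": 0.7 beats 0.8 minus the 0.5 similarity penalty
```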
Step 4: Trade-offs & Challenges (15 min)¶
Multi-Objective Optimization¶
"Balancing competing objectives..."
"Objectives:
1. Engagement (maximize)
2. Quality (maximize)
3. Freshness (balance)
4. Creator fairness (constraint)
5. User safety (hard constraint)
Approach:
1. Scalarization:
Score = Σ wi × objective_i
2. Constraints:
- Min quality threshold
- Max clickbait score
- Safety must pass
3. Pareto optimization:
- Find Pareto-optimal solutions
- Business chooses point on frontier"
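The scalarization-plus-constraints approach can be sketched as a single scoring function. Threshold values here are placeholders, and returning None as the "drop this item" signal is just one possible convention:

```python
def policy_score(preds, weights, quality, clickbait, safety_passed,
                 min_quality=0.3, max_clickbait=0.7):
    """Scalarized ranking score with hard constraints.
    Returns None when the item must be dropped from the feed."""
    if not safety_passed:                     # safety: hard constraint
        return None
    if quality < min_quality or clickbait > max_clickbait:
        return None                           # quality / clickbait gates
    return sum(w * preds.get(task, 0.0) for task, w in weights.items())

ok = policy_score({"like": 0.2}, {"like": 1.0},
                  quality=0.5, clickbait=0.1, safety_passed=True)
blocked = policy_score({"like": 0.9}, {"like": 1.0},
                       quality=0.5, clickbait=0.1, safety_passed=False)
```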
Handling Viral Content¶
"Content can go viral in minutes..."
"Challenge:
- Model was trained on historical data
- Viral post has no history
- Must react quickly
Solutions:
1. **Real-time Signals**
- Stream engagement events
- Compute trending score
- Boost trending content
2. **Velocity Features**
- Engagement rate in last 1h
- Acceleration (rate of change of the engagement rate)
3. **Fast Retraining**
- Online learning for trending
- Hourly model updates"
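The velocity features can be sketched with a sliding window over engagement timestamps. This is an in-memory toy; a real system would compute the same quantities in a streaming job and push them to the Feature Store:

```python
from collections import deque

class VelocityTracker:
    """Sliding-window engagement counter for trending detection (sketch)."""
    def __init__(self, window_sec=3600):
        self.window = window_sec
        self.events = deque()                 # engagement timestamps, seconds

    def record(self, ts):
        self.events.append(ts)

    def rate(self, now):
        """Engagements per hour over the trailing window."""
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()             # evict stale events
        return len(self.events) / (self.window / 3600.0)

    def acceleration(self, now):
        """Recent half-window count minus previous half-window count.
        Positive means engagement is speeding up."""
        half = self.window / 2
        recent = sum(1 for t in self.events if t >= now - half)
        earlier = sum(1 for t in self.events
                      if now - self.window <= t < now - half)
        return recent - earlier

tracker = VelocityTracker(window_sec=3600)
for ts in (100, 200, 3500):                   # three likes within the hour
    tracker.record(ts)
hourly_rate = tracker.rate(now=3600)          # 3 events over 1 hour
accel = tracker.acceleration(now=3600)        # 1 recent - 2 earlier = -1
```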
Position Bias¶
"Users scroll from top..."
"Bias:
- Top positions get more engagement
- Training data reflects this bias
- Model overfits to position
Debiasing:
1. **Inverse Propensity Weighting**
- Weight = 1 / P(view | position)
2. **Position as Feature**
- Train with position
- Predict at position=1
3. **Randomization**
- Small % of random ranking
- Use for unbiased evaluation"
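Inverse Propensity Weighting reduces to dividing each training sample's weight by the view propensity of the position it was shown at. The propensity table below is made up for illustration; in practice it is estimated from the randomized slice of traffic:

```python
def ipw_weight(position, view_prob):
    """Inverse propensity weight: engagement at rarely-viewed positions
    counts more in the training loss."""
    return 1.0 / view_prob[position]

# Hypothetical P(view | position) values, NOT measured data.
view_prob = {1: 0.95, 2: 0.80, 5: 0.40, 10: 0.15}

w_top = ipw_weight(1, view_prob)
w_deep = ipw_weight(10, view_prob)   # a click here is far more informative
```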
Step 5: Extensions (10 min)¶
Common Questions¶
Q: How to handle new users (cold start)?
"New user has no history:
1. Demographic-based recommendations
2. Popular content in their region
3. Onboarding interests selection
4. Rapid learning from first interactions
5. Social graph initialization (import contacts)"
Q: How to insert ads?
"Ads are part of the feed:
1. Separate ad ranking system
2. Fixed ad slots (every 5th position)
3. Relevance matching to organic content
4. Frequency caps per user
5. Quality constraints (no jarring experience)"
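The fixed-slot policy ("every 5th position") can be sketched as a simple blend step; handling the case where the ad supply runs out is one of the details interviewers like to probe:

```python
def blend_ads(organic, ads, every=5):
    """Fixed-slot ad insertion: one ad after every `every`-th organic post."""
    feed, ad_iter = [], iter(ads)
    for i, post in enumerate(organic, start=1):
        feed.append(post)
        if i % every == 0:
            ad = next(ad_iter, None)   # out of ads -> just skip the slot
            if ad is not None:
                feed.append(ad)
    return feed

feed = blend_ads([f"post{i}" for i in range(1, 11)], ["ad1", "ad2"])
# ad1 lands after the first 5 organic posts, ad2 after the next 5
```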
Q: Content safety?
"Filter harmful content:
1. Pre-screening before indexing
2. Real-time moderation signals
3. Hard filter in ranking (never show flagged)
4. Soft filter (demote borderline content)
5. User controls (hide, report)"
Interview Checklist¶
Must Cover:¶
- Three-stage pipeline
- Multi-task ranking model
- User-creator affinity features
- Diversity mechanisms
- Real-time updates for viral
Good to Cover:¶
- Position bias handling
- Multi-objective optimization
- Cold start
- Ad integration
- Content safety
Red Flags:¶
- Single objective (engagement only)
- No diversity consideration
- Ignoring content freshness
- Not mentioning viral handling
- Forgetting creator fairness
Sample Script¶
Interviewer: "Design Instagram feed ranking"
You: "Let me clarify - are we including Reels
and Stories, or just main feed?"
Interviewer: "Main feed with photos and videos"
You: "And the scale?"
Interviewer: "1 billion users, 200 million posts/day"
You: "Here's my approach:
[Draw architecture]
Three-stage pipeline:
1. Candidate generation from social graph
2. Multi-task ranking (like, comment, watch_time)
3. Blending with diversity and ads
For ranking, I'd use a multi-task neural network
predicting multiple engagement types.
Key features:
- User-creator affinity (interaction history)
- Content quality signals
- Real-time engagement velocity
For diversity, I'd use MMR re-ranking to avoid
showing all similar content.
For viral content, stream engagement events
and compute trending scores in real-time.
Shall I dive into any component?"
Common Misconceptions¶
Misconception: you should start drawing the architecture right away
The first 5 minutes are for clarifying questions. Candidates who skip this step design the system for the wrong constraints in 60%+ of cases. A "social network" and a "news aggregator" are completely different feeds: social graph vs interest graph, 500 friends vs 10M+ publisher posts. Without pinning down the scale (100K DAU vs 2B DAU), the pipeline can end up over-engineered or under-engineered.
Misconception: describing the model well is enough for a strong hire
ML System Design is not ML modeling. Interviewers evaluate 5 dimensions: (1) problem formulation, (2) data/features, (3) model architecture, (4) serving infrastructure, (5) trade-offs. A candidate who describes the model perfectly but never mentions the latency budget, position bias debiasing, or diversity mechanisms gets a lean hire at best. Systems thinking matters more than model quality.
Misconception: position bias is an unsolvable problem
There are 3 proven approaches: (1) Inverse Propensity Weighting -- weight each sample by 1/P(view|position) to remove the bias from the training data; (2) Position as Feature -- train with a position feature, then set position=1 at inference; (3) Randomization -- serve 1--5% of traffic in random order to collect unbiased data. Facebook and YouTube use a combination of these approaches and see 2--5% improvement in offline metrics after debiasing.
Interview Questions¶
How would you split the time in a 45-minute MLSD interview?
"I'd spend 30 minutes on the model -- it's the most important part"
"5 minutes on clarifying questions (platform type, scale, objectives), 10 minutes on high-level architecture (3-stage pipeline with concrete latency budgets), 15 minutes on the deep dive (multi-task model, key features, training pipeline), 10 minutes on trade-offs (diversity vs engagement, position bias, viral content), and 5 minutes on extensions (cold start, ads, safety). This split covers all the dimensions interviewers evaluate."
How would you handle viral content that gets 100K likes in 10 minutes?
"We'd retrain the model on fresh data"
"Retraining takes hours -- that won't work. Three approaches: (1) stream engagement events in real time through Kafka and compute velocity features (engagement rate over the last hour, acceleration -- how fast the rate itself is growing); (2) boost the score of high-velocity content -- a 2--5x multiplier at the post-ranking stage; (3) online learning for the trending signal -- update a lightweight model every 15--30 minutes. The key point: velocity features must live in the Feature Store with <5 ms latency, updated near-real-time."