Practical Pitfalls Flashcards

(39 cards)

1
Q

What is data leakage in ML and statistical modeling?

A

Using information during training or feature creation that would not be available at prediction time, leading to overly optimistic evaluation and poor deployment performance.

2
Q

What is target leakage specifically?

A

A type of data leakage where features directly or indirectly encode the target variable using information from the future or post-outcome events.

3
Q

Why is target leakage particularly dangerous?

A

It can make models appear extremely accurate in offline evaluation, only to collapse when deployed because the leaked information is absent.

4
Q

What is an example of target leakage in a credit risk model?

A

Including a feature like ‘loan written off’ or ‘days delinquent after default date’ when predicting default at application time.

5
Q

How can cross-validation be misused to create leakage?

A

By computing feature transformations, scaling, or imputations on the full dataset before splitting, so information from validation folds influences training.
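
A minimal sketch of the fix, assuming scikit-learn is available: fit all preprocessing inside each fold via a Pipeline, so validation folds never influence the transformation statistics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Leaky pattern: the scaler sees the full dataset before cross-validation.
X_scaled = StandardScaler().fit_transform(X)
leaky_scores = cross_val_score(LogisticRegression(), X_scaled, y, cv=5)

# Safe pattern: the scaler is refit on the training fold of each split only.
pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
safe_scores = cross_val_score(pipe, X, y, cv=5)
```

With plain scaling the leak is mild, but the same wiring applies to imputation, target encoding, and feature selection, where the optimism can be large.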

6
Q

What is look-ahead bias in time-series modeling?

A

Using data from the future in training or evaluation when simulating predictions that would have been made in the past.

7
Q

How do you avoid look-ahead bias in time-series evaluation?

A

Use chronological splits where training uses only past data and validation/test use future periods.
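
A minimal sketch, assuming scikit-learn: TimeSeriesSplit produces chronological folds in which every training index precedes every test index.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

n_obs = 100  # observations assumed to be sorted in time order
splits = list(TimeSeriesSplit(n_splits=4).split(np.arange(n_obs)))

for train_idx, test_idx in splits:
    # Training always ends before testing begins, so there is no look-ahead.
    assert train_idx.max() < test_idx.min()
```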

8
Q

Why is using the test set repeatedly for model tuning a pitfall?

A

It effectively turns the test set into another validation set, causing optimistic bias in reported performance.

9
Q

What is the correct role of the test set in ML experiments?

A

To provide a final, unbiased estimate of performance after all model selection and tuning decisions are complete.

10
Q

What is selection bias in datasets?

A

Bias introduced when the observed data are not a random sample from the target population, often due to the way data are collected or filtered.

11
Q

How can selection bias affect model performance in production?

A

Models trained on biased samples may perform poorly when deployed to a broader or different population.

12
Q

What is covariate shift?

A

A change between training and deployment in the distribution of input features while the conditional distribution of outputs given inputs remains the same.

13
Q

What is label shift?

A

A change in the distribution of labels across environments, while the conditional distribution of inputs given labels remains relatively stable.

14
Q

Why is ignoring distribution shift a pitfall?

A

Models evaluated only under the training distribution may fail when real-world conditions change, leading to unexpected degradation.

15
Q

What is class imbalance and why is it problematic?

A

When one class is much more frequent than others; naive models can achieve high accuracy by predicting the majority class while failing on the minority.

16
Q

What metric-related mistake is common with imbalanced data?

A

Relying on accuracy instead of metrics like precision, recall, F1, or PR-AUC that focus on the minority class.
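
A toy illustration, assuming scikit-learn's metrics: a model that always predicts the majority class scores 95% accuracy on a 95/5 split while catching zero minority cases.

```python
from sklearn.metrics import accuracy_score, f1_score, recall_score

y_true = [0] * 95 + [1] * 5   # 5% minority class
y_pred = [0] * 100            # always predict the majority class

acc = accuracy_score(y_true, y_pred)                 # 0.95 -- looks great
rec = recall_score(y_true, y_pred, zero_division=0)  # 0.0 -- misses every positive
f1 = f1_score(y_true, y_pred, zero_division=0)       # 0.0
```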

17
Q

What is overfitting in the context of model complexity?

A

When a model fits noise and idiosyncrasies in the training data, achieving low training error but high test error.

18
Q

Why is evaluating a very flexible model on a tiny validation set a pitfall?

A

Random noise in the small validation set can mislead model selection, making unstable models look best by chance.

19
Q

What is the danger of p-hacking in statistical analysis?

A

Testing many hypotheses or analysis variants and only reporting significant ones inflates the false positive rate and undermines trust in results.

20
Q

Why is ‘p<0.05’ not adequate evidence on its own?

A

It ignores effect size, uncertainty, prior plausibility, multiple testing, and costs/benefits; context is essential.

21
Q

What is a common misinterpretation of a 95% confidence interval?

A

Believing there is a 95% probability that the true parameter lies in this specific interval, rather than understanding it as a long-run coverage property.
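
The long-run coverage property can be checked by simulation (a sketch using only NumPy and a normal approximation): each individual interval either contains the fixed true mean or it does not; it is the procedure that succeeds about 95% of the time.

```python
import numpy as np

rng = np.random.default_rng(42)
true_mu, n, trials = 10.0, 50, 2000
covered = 0
for _ in range(trials):
    sample = rng.normal(loc=true_mu, scale=2.0, size=n)
    half_width = 1.96 * sample.std(ddof=1) / np.sqrt(n)
    if sample.mean() - half_width <= true_mu <= sample.mean() + half_width:
        covered += 1

coverage = covered / trials  # close to 0.95 over many repetitions
```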

22
Q

Why is extrapolating far beyond the range of training data risky?

A

Model relationships that hold within the observed range may not hold outside it, leading to wildly inaccurate predictions.
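
A toy illustration using only NumPy: a straight line approximates y = x² reasonably on the observed range [0, 1] but is far off when extrapolated to x = 3.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100)
y = x**2
line = np.polyfit(x, y, deg=1)  # least-squares straight-line fit

in_range = np.polyval(line, 0.5)  # near the true value 0.25
far_out = np.polyval(line, 3.0)   # far below the true value 9.0
```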

23
Q

What is label noise?

A

Errors or inconsistencies in target labels, such as misclassifications or ambiguous outcomes.

24
Q

How can heavy label noise affect model performance and evaluation?

A

It can cap achievable accuracy, cause models to overfit spurious patterns, and distort metrics if not accounted for.

25
Q

What pitfall arises from ignoring missing data mechanisms?

A

Assuming missingness is random when it actually depends on unobserved factors can bias estimates and model predictions.

26
Q

What is MCAR vs MAR vs MNAR in missing data?

A

Missing Completely At Random, Missing At Random (missingness depends only on observed data), and Missing Not At Random (missingness depends on the unobserved values themselves).

27
Q

Why is deleting all rows with missing values often a bad idea?

A

It can waste data and introduce bias if missingness is related to the outcome or covariates.

28
Q

What is data snooping in feature engineering?

A

Looking at test or validation labels while creating features or selecting variables, thereby bleeding information from evaluation sets into training.
29
Q

Why is scaling features on the whole dataset before splitting a subtle pitfall?

A

It uses statistics from test/validation data inside training transformations, introducing mild leakage and optimistic estimates.
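
A minimal sketch of the right ordering, assuming scikit-learn: split first, then fit the scaler on the training split only and reuse those statistics on the test split.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(7).normal(loc=5.0, scale=2.0, size=(100, 3))
X_train, X_test = train_test_split(X, test_size=0.25, random_state=0)

scaler = StandardScaler().fit(X_train)  # statistics come from train only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)     # test data never influences the fit
```
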
30
Q

What is the multiple comparisons problem in ML model search?

A

Training many models or evaluating many metrics and then cherry-picking the best result without accounting for the number of alternatives tried.
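
A toy demonstration using only NumPy: the best of many random 'models' scored against pure-noise labels beats chance through selection alone.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=200)  # coin-flip labels: nothing is learnable

# Score 500 "models" that are themselves independent coin flips.
scores = [(rng.integers(0, 2, size=200) == y).mean() for _ in range(500)]
best = max(scores)  # noticeably above 0.5 purely by chance
```
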
31
Q

How can ensembling be misused as a pitfall?

A

Blindly stacking many models without understanding their diversity and error correlation can lead to opaque systems that are hard to debug or govern.

32
Q

Why is ignoring uncertainty in metrics a practical pitfall?

A

Treating noisy metric differences as real improvements may cause unnecessary model churn and degraded performance in production.

33
Q

What is train/test contamination through temporal ordering?

A

Randomly splitting time-dependent data so that future events appear in the training set or leak into feature calculations for past test instances.

34
Q

Why is ignoring business constraints a statistical pitfall?

A

A model that optimizes metrics but violates latency, fairness, or operational constraints will not succeed in real deployment.

35
Q

What is survivorship bias in datasets?

A

Bias that occurs when only entities that ‘survive’ some process are observed, missing those that dropped out or failed earlier.

36
Q

How can survivorship bias distort models?

A

Models may learn only from successes or survivors, misestimating risk or performance for new cases.

37
Q

Why is blindly trusting default library settings a pitfall?

A

Defaults may not suit your data’s scale, distribution, or problem type, leading to suboptimal or misleading results if not examined.

38
Q

What is a key sign that a result may be ‘too good to be true’?

A

Extremely high metrics compared to baselines or domain expectations, especially when accompanied by complex pipelines and minimal scrutiny for leakage.

39
Q

In one sentence, what is the overarching lesson about practical statistical pitfalls?

A

Be suspicious of surprisingly good results, keep evaluation data sacred, respect time and the data-generating process, and always ask how your data and setup could be lying to you.