11 Implementation Evaluation & Learning Flashcards

(30 cards)

1
Q

What is implementation evaluation?

A

The systematic assessment of how an intervention is delivered, used, and sustained.

Focuses on process and use; distinct from impact evaluation.

2
Q

Why is implementation evaluation different from performance monitoring?

A

Because its purpose is learning, not accountability.

Explains variation and supports adaptation.

3
Q

What is the primary purpose of implementation evaluation?

A

To inform course correction during implementation.

Real-time learning and design refinement.

4
Q

What does formative evaluation mean?

A

Evaluation conducted to improve implementation while it is ongoing.

Early and iterative; prevents late surprises.

5
Q

What does summative evaluation mean?

A

Evaluation conducted to judge overall success after implementation.

Retrospective and outcome-focused; too late to fix design.

6
Q

Why is formative evaluation especially important in implementation?

A

Because systems adapt as you intervene.

Drift occurs and workarounds emerge.

7
Q

What question should evaluation answer early on?

A

“Is this being used as intended?”

Focus on adoption, fidelity, and acceptability.

8
Q

Why is “Did it work?” the wrong early evaluation question?

A

Because effects depend on implementation quality.

No use → no effect; poor use → misleading effect.

9
Q

What makes evaluation actionable?

A

Clear linkage between findings and decisions.

If no decision follows, evaluation failed.

10
Q

Why do dashboards often fail implementation teams?

A

Because they show numbers without interpretation.

No causal story and no phase awareness.

11
Q

What role does qualitative data play in evaluation?

A

It explains why quantitative patterns occur.

Reveals mechanisms and surfaces hidden costs.

12
Q

Why is mixed-methods evaluation powerful?

A

Because numbers and narratives answer different questions.

Quant = what and how much; Qual = why and how.

13
Q

What is a common evaluation mistake early in implementation?

A

Measuring distal outcomes too soon.

Lagging indicators and confounded results.

14
Q

How should evaluation change across phases?

A

Focus shifts as implementation matures.

Early: adoption and feasibility; Mid: fidelity and acceptability; Late: sustainment.

15
Q

Why is evaluation inseparable from Theory of Change?

A

Because evaluation tests causal assumptions.

Which links hold and which links break.

16
Q

What does “learning orientation” mean in evaluation?

A

Treating data as guidance, not judgement.

Curiosity over blame; adaptation over defence.

17
Q

Why do teams hide or downplay negative findings?

A

Because evaluation is tied to performance judgement.

Reputation risk and funding pressure.

18
Q

How does evaluation support adaptation decisions?

A

By identifying where friction concentrates.

Drop-off points and drift indicators.

19
Q

What is the danger of over-measurement?

A

Measurement burden that slows implementation.

Data collection fatigue and reduced goodwill.

20
Q

Why is evaluation timing as important as what you measure?

A

Because systems change over time.

Early volatility and later stabilisation.

21
Q

What is “evaluation drift”?

A

When measures lose relevance as implementation evolves.

Old questions and new realities.

22
Q

How can evaluation unintentionally distort behaviour?

A

By incentivising metric optimisation over real improvement.

Gaming and surface compliance.

23
Q

Why should evaluation findings be shared quickly?

A

Because delayed feedback reduces learning value.

Fast loops and timely adjustment.

24
Q

What does “closing the loop” in evaluation mean?

A

Acting on findings and checking effects.

Decision → change → reassessment.

25
Q

Why is evaluation critical for sustainment?

A

Because drift and decay are gradual.

Early warning signals and maintenance triggers.

26
Q

How does evaluation support scale-up?

A

By revealing which elements travel well and which don’t.

Core robustness and context sensitivity.

27
Q

What evaluation mindset supports OHFE credibility?

A

Non-defensive, system-focused inquiry.

Avoids blame and enables redesign.

28
Q

Why is evaluation often resisted by frontline staff?

A

Because it is experienced as surveillance.

Past misuse and punitive culture.

29
Q

What signals healthy implementation learning?

A

Regular adjustments based on evidence.

Changing strategies and revising assumptions.

30
Q

In one line, what is implementation evaluation for practitioners?

A

A disciplined way to learn whether change is working, and how to make it work better.

Focuses on continuous improvement.