The Problem
Do rewards change behaviour, or do they simply attract people who are already active?
Rewards programmes that integrate third-party lifestyle products face a core measurement challenge. Without a structured pilot and bias controls, any observed improvement in activity or retention cannot be cleanly attributed to the reward itself. This case study builds the analytics layer to evaluate that question for a hypothetical Vitality-style programme integrating Spotify Africa.
Measurement Design
12-Week Pilot Structure
Four weeks of pre-activation baseline data followed by eight weeks of post-activation data, with an opt-in cohort measured against a non-engaged control group throughout.
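The pilot window above can be sketched as a simple phase-labelling step in the pipeline; the column names here are hypothetical, not taken from the real tables.

```python
# Sketch of the 12-week pilot window: weeks 1-4 are pre-activation
# baseline, weeks 5-12 are post-activation. Column names are assumed.
import pandas as pd

def label_pilot_phase(df: pd.DataFrame, week_col: str = "week") -> pd.DataFrame:
    """Tag each member-week row as baseline or post-activation."""
    out = df.copy()
    out["phase"] = out[week_col].apply(lambda w: "baseline" if w <= 4 else "post")
    return out

weeks = pd.DataFrame({"week": range(1, 13)})
labelled = label_pilot_phase(weeks)
print(labelled["phase"].tolist())
# ['baseline', 'baseline', 'baseline', 'baseline', 'post', 'post', ...]
```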
Key Findings · Activity
Activity Score — Pre vs Post Activation
Average weekly activity score by cohort. Engaged members improved from their own baseline; the control group showed minimal change.
Opt-in members entered the pilot with a baseline activity score of 63 vs 49 for the control group — a 14-point gap before the reward activated. Members who chose to participate were already more active.
The post-activation improvement in the engaged cohort (+14 points) cannot be cleanly attributed to the reward. A randomised assignment design would be required to establish causal direction.
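One way to net out the selection effect is a simple difference-in-differences contrast. The engaged figures below follow the text (63 baseline, +14 improvement); the control cohort's post value is an assumed placeholder for "minimal change", not a reported number.

```python
# Difference-in-differences sketch. Engaged: 63 -> 77 (+14).
# Control: 49 baseline; post value of 50 is an assumed placeholder
# for the "minimal change" described in the findings.
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Change in the treated cohort net of the control cohort's change."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

effect = diff_in_diff(treat_pre=63, treat_post=77, ctrl_pre=49, ctrl_post=50)
print(effect)  # 13 under these assumed numbers
```

Even this contrast only helps if the cohorts are comparable at baseline, which the equivalence test below shows they are not.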
Bias Controls
Baseline Equivalence Test
Cohort characteristics at week 1 (before any reward exposure). A well-designed experiment would show comparable baselines; these cohorts do not.
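A common way to run this check is the standardised mean difference (SMD) between cohorts at week 1; values above roughly 0.1 are usually read as meaningful imbalance. The means below are the quoted baselines (63 vs 49); the standard deviations are assumed for illustration.

```python
# Baseline equivalence via standardised mean difference (SMD).
import math

def standardised_mean_difference(mean_a, sd_a, mean_b, sd_b):
    """(mean_a - mean_b) scaled by the pooled standard deviation."""
    pooled_sd = math.sqrt((sd_a**2 + sd_b**2) / 2)
    return (mean_a - mean_b) / pooled_sd

# Quoted baselines: engaged 63 vs control 49; SDs of 12 are assumptions.
smd = standardised_mean_difference(63, 12, 49, 12)
print(round(smd, 2))  # 1.17 with these assumed SDs -- far above 0.1
```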
Key Findings · Retention
Cohort Retention at Week 8
Monthly retention tracked by engagement tier. The engaged cohort retained at a materially higher rate by the 8-week mark — though baseline differences mean this should be read as directional, not causal.
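The week-8 retention figure can be computed as a cohort-level share of still-active members from the member-week table; the column names and toy rows below are illustrative, not the real data.

```python
# Week-8 retention per cohort from a member-week table.
# Column names and example rows are assumptions for illustration.
import pandas as pd

def retention_at_week(df, week, cohort_col="cohort", active_col="is_active"):
    """Share of each cohort still active in the given week."""
    snapshot = df[df["week"] == week]
    return snapshot.groupby(cohort_col)[active_col].mean()

rows = pd.DataFrame({
    "member_id": [1, 2, 3, 4],
    "week": [8, 8, 8, 8],
    "cohort": ["engaged", "engaged", "control", "control"],
    "is_active": [1, 1, 1, 0],
})
print(retention_at_week(rows, week=8).to_dict())
# {'control': 0.5, 'engaged': 1.0}
```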
Data Model
Warehouse-Style Star Schema
Designed for Power BI with clearly separated dimension and fact layers. All tables are generated synthetically via the Python pipeline.
member_week_pilot.csv and member_summary.csv
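A minimal sketch of how the fact layer joins to a member dimension in the Python pipeline; apart from the two CSV names above, every table and column name here is an assumption about the schema.

```python
# Star-schema join sketch: fact rows enriched with dimension
# attributes, mirroring how Power BI resolves the relationship.
# Table and column names are illustrative assumptions.
import pandas as pd

dim_member = pd.DataFrame({
    "member_id": [1, 2],
    "cohort": ["engaged", "control"],
})
fact_member_week = pd.DataFrame({
    "member_id": [1, 1, 2],
    "week": [1, 2, 1],
    "activity_score": [60, 65, 48],
})

joined = fact_member_week.merge(dim_member, on="member_id", how="left")
print(joined.shape)  # (3, 4): three fact rows, cohort column appended
```

A left join from fact to dimension keeps every fact row even if a member key is missing from the dimension, which is the behaviour a warehouse-style model usually wants.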
Product Implications
What the Analysis Suggests
Stack