![Executive dashboard](dashboard-executive.png)

<aside>

📌 Type: Product Analytics · A/B Testing · Statistical Analysis

🛠 Tools: Python · scipy · pandas · Power BI · GitHub

📅 Date: April 2026

</aside>

<aside>

📊 Explore the Project

Choose how you’d like to explore the analysis:

🔎 Quick View (No Setup Required)

View the full dashboard instantly:

📄 Dashboard PDF: https://drive.google.com/file/d/1fC7xvPO7qkcicEoePayBq5JErq9Wjg_N/view?usp=sharing

💻 Interactive Dashboard (Power BI)

Download and explore filters, segments, and metrics:

📥 Download .pbix File: https://drive.google.com/file/d/1uTDF4gmt2eBWq7VLDVz7wvWJI9ZE8Fr7/view?usp=sharing

🧠 Complete Code & Analysis

Access Python scripts, datasets, and full documentation:

🔗 GitHub Repository: https://github.com/promptedbyesha/product-feature-adoption-ab-test-analysis

</aside>

<aside>

🔴 The Problem

Product teams ship features without knowing if they actually work. Rolling out to all users without evidence wastes engineering effort and risks hurting the experience for segments the feature doesn't help. This project simulates what a rigorous pre-rollout experiment looks like, from hypothesis design through statistical validation to a final ship/iterate recommendation.

</aside>


🗃️ What I Built

<aside>

A complete end-to-end A/B test analysis pipeline:

Experiment design — wrote the hypothesis, defined success metrics, calculated minimum detectable effect and required sample size before touching any data.

Dataset simulation — generated 5,000 realistic user records in Python with controlled retention rates, realistic funnel drop-offs, and segment-level behavior differences baked in.

Statistical testing — ran chi-square tests of proportions with 95% confidence intervals on three segments: all users, new users, and returning users.

Segmentation — broke results down by user type, device, and region to find where the feature actually works versus where it doesn't.

Funnel analysis — tracked users from feature exposure through click, use, and Day-7 retention to understand the adoption path.

Power BI dashboard — built a 4-page interactive dashboard presenting the full analysis for both technical and non-technical audiences.

Decision memo — delivered a one-page ship/iterate recommendation backed by statistical evidence.

</aside>
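The experiment-design step above (minimum detectable effect and required sample size) can be sketched with a standard two-proportion power calculation. The baseline and expected rates below are illustrative, borrowed from the reported "All users" results; the project's actual planning inputs are an assumption here:

```python
import math
from scipy.stats import norm

def sample_size_per_group(p_baseline, p_expected, alpha=0.05, power=0.80):
    """Minimum users per arm for a two-proportion test (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # power requirement
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    effect = p_expected - p_baseline   # minimum detectable effect (absolute)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Example: detect a lift from 38.0% to 43.4% at alpha = 0.05 with 80% power.
n = sample_size_per_group(0.38, 0.434)
```

With these inputs the formula calls for roughly 1,300 users per arm, comfortably within a 5,000-user experiment.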
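The funnel-analysis step (exposure → click → use → Day-7 retention) can be sketched in pandas. The column names and drop-off rates below are illustrative, not the project's actual schema:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 5000

# Hypothetical per-user flags for each funnel stage; each stage is a
# subset of the one before it, mirroring the exposure -> click -> use
# -> Day-7 retention path described above.
df = pd.DataFrame({
    "exposed": np.ones(n, dtype=bool),
    "clicked": rng.random(n) < 0.60,
})
df["used"] = df["clicked"] & (rng.random(n) < 0.70)
df["retained_d7"] = df["used"] & (rng.random(n) < 0.45)

steps = ["exposed", "clicked", "used", "retained_d7"]
funnel = pd.DataFrame({"users": [int(df[s].sum()) for s in steps]}, index=steps)
# Step conversion = share of users who reached the previous stage
# that also reached this one.
funnel["step_conversion"] = funnel["users"] / funnel["users"].shift(1)
```

Because each stage flag is defined as an AND with the previous stage, the user counts are guaranteed to decrease monotonically down the funnel.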


📈 Key Results

| Segment | Control | Test | Relative Lift | p-value | Significant (α = 0.05) |
| --- | --- | --- | --- | --- | --- |
| All users | 38.0% | 43.4% | +14.2% | 0.0001 | ✓ Yes |
| New users | 29.5% | 36.5% | +23.7% | 0.000046 | ✓ Yes |
| Returning users | 50.8% | 53.8% | +5.9% | 0.179 | ✗ No |
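The "All users" row can be reproduced with scipy. The raw counts are an assumption: the page reports rates for 5,000 simulated users, so an even 2,500/2,500 control/test split is assumed here:

```python
import numpy as np
from scipy.stats import chi2_contingency, norm

# Assumed split: 2,500 users per arm; conversion counts derived from
# the reported 38.0% (control) and 43.4% (test) rates.
control_n, test_n = 2500, 2500
control_conv = round(0.380 * control_n)  # 950
test_conv = round(0.434 * test_n)        # 1085

# 2x2 contingency table: converted vs. not converted, per arm.
table = np.array([
    [control_conv, control_n - control_conv],
    [test_conv, test_n - test_conv],
])
chi2, p, dof, _ = chi2_contingency(table, correction=False)

# 95% Wald confidence interval for the absolute difference in proportions.
p1, p2 = control_conv / control_n, test_conv / test_n
diff = p2 - p1
se = np.sqrt(p1 * (1 - p1) / control_n + p2 * (1 - p2) / test_n)
lo, hi = diff - norm.ppf(0.975) * se, diff + norm.ppf(0.975) * se
```

Under this assumed split, the test returns p ≈ 0.0001, matching the table, with a confidence interval on the absolute lift that excludes zero.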

💡 The Insight That Changed the Recommendation