📊 4.2 Exploratory Data Analysis (EDA)

Exploratory Data Analysis (EDA) is where we look before we leap. We’ll use vitamin_trial.csv to summarise, visualise, and question our data so later modelling steps are grounded in reality.


🎯 Objectives

  • Profile the dataset and assess missingness.
  • Recap distributions (with a link to 4.1 for deeper details).
  • Produce group summaries (means/medians, CIs) with tidy tables.
  • Explore relationships: scatter/regression, pair plots, correlation heatmaps (Pearson & Spearman).
  • Identify potential outliers and note next actions.
Why EDA matters: EDA reveals data quality issues (missingness, outliers, coding quirks), structure (groups, clusters), and plausible transformations. It informs the rest of your analysis pipeline.

# Setup for Google Colab: Fetch datasets automatically or manually
import os
from google.colab import files

MODULE = '04_data_analysis'
DATASET = 'vitamin_trial.csv'
BASE_PATH = '/content/data-analysis-projects'
MODULE_PATH = os.path.join(BASE_PATH, 'notebooks', MODULE)
DATASET_PATH = os.path.join('data', DATASET)

try:
    print('Attempting to clone repository...')
    if not os.path.exists(BASE_PATH):
        !git clone https://github.com/ggkuhnle/data-analysis-projects.git
    print('Setting working directory...')
    os.chdir(MODULE_PATH)
    if os.path.exists(DATASET_PATH):
        print(f'Dataset found: {DATASET_PATH} ✅')
    else:
        raise FileNotFoundError('Dataset missing after clone.')
except Exception as e:
    print(f'Cloning failed: {e}')
    print('Falling back to manual upload...')
    os.makedirs('data', exist_ok=True)
    uploaded = files.upload()
    if DATASET in uploaded:
        with open(DATASET_PATH, 'wb') as f:
            f.write(uploaded[DATASET])
        print(f'Successfully uploaded {DATASET} ✅')
    else:
        raise FileNotFoundError(f'Upload failed. Please upload {DATASET}.')
%pip install -q pandas numpy matplotlib seaborn statsmodels scipy
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from scipy.stats import pearsonr, spearmanr

pd.set_option('display.max_columns', 50)
sns.set_theme()
print('EDA environment ready.')

1) Load & Quick Inspect

We start with a structural and statistical glance at the dataset.

df = pd.read_csv('data/vitamin_trial.csv')
print('Shape:', df.shape)
print('\nColumn dtypes:')
print(df.dtypes)

print('\nPeek at the first rows:')
display(df.head())

print('\nSummary (numeric):')
display(df.describe())

print('\nSummary (including categorical):')
display(df.describe(include='all'))

2) Missingness & Basic Data Quality

Before plotting, ensure we know where the gaps are. If missingness clusters by group/time, it may bias results.

print('Missing values per column:')
print(df.isna().sum().sort_values(ascending=False))

# Missingness by group (if columns exist)
if {'Group','Vitamin_D'}.issubset(df.columns):
    miss_rate = df.groupby('Group', observed=True)['Vitamin_D'].apply(lambda s: s.isna().mean())
    print('\nProportion missing in Vitamin_D by Group:')
    display(miss_rate)
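Beyond raw counts, a per-column percentage table makes gaps easier to compare at a glance. A minimal sketch on a small synthetic frame (the values and gaps here are illustrative, not from `vitamin_trial.csv`):

```python
import numpy as np
import pandas as pd

# Small synthetic frame with deliberate gaps (illustrative data)
demo = pd.DataFrame({
    'Group':     ['A', 'A', 'B', 'B', 'B'],
    'Vitamin_D': [50.0, np.nan, 47.5, np.nan, 52.1],
    'Age':       [34, 29, np.nan, 41, 38],
})

# Percentage of missing values per column, largest first
miss_pct = demo.isna().mean().mul(100).round(1).sort_values(ascending=False)
print(miss_pct)

# Missingness rate per group for one column (mirrors the cell above)
rate_by_group = demo.groupby('Group')['Vitamin_D'].apply(lambda s: s.isna().mean())
print(rate_by_group)
```

If the per-group rates differ markedly, missingness may not be random and deserves a closer look before modelling.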

3) Distributions (recap)

A brief recap using hist/KDE for Vitamin_D; detailed normality/Q–Q/log guidance is in Notebook 4.1.

x = df['Vitamin_D'].dropna()
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
sns.histplot(x, bins=20, kde=True, ax=ax)
ax.set_title('Vitamin_D (hist + KDE)')
ax.set_xlabel('Vitamin_D (µg)')
plt.tight_layout(); plt.show()
print('For Q–Q plots, Shapiro–Wilk tests, and log-transform demos, see Notebook 4.1.')

4) Grouped Descriptives (means/medians/CI)

Summarise by key factors (e.g., Group, Outcome, Time). This clarifies patterns before modelling.

Common recipes:
  • Mean/median/count per group.
  • Within-group standard deviation (variability).
  • Confidence intervals for group means (basic normal approximation).
def mean_se_ci(s):
    s = pd.Series(s).dropna()
    n = s.size
    if n == 0:
        return pd.Series({'mean': np.nan, 'se': np.nan, 'low': np.nan, 'high': np.nan, 'n': 0})
    m = s.mean()
    sd = s.std(ddof=1) if n > 1 else 0.0
    se = sd / np.sqrt(n) if n > 0 else np.nan
    low = m - 1.96 * se
    high = m + 1.96 * se
    return pd.Series({'mean': m, 'se': se, 'low': low, 'high': high, 'n': n})

if {'Group','Vitamin_D'}.issubset(df.columns):
    grp = (
        df.groupby('Group', observed=True)['Vitamin_D']
          .apply(mean_se_ci)
          .unstack()   # turn the per-group stats Series into columns
          .reset_index()
    )
    display(grp.head(10))

    # Compact summary table of means by group
    grp_means = df.groupby('Group', observed=True)['Vitamin_D'].mean().reset_index(name='mean_VitD')
    display(grp_means)
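The 1.96 multiplier above is a normal approximation; for small groups a t-based multiplier is more accurate. A minimal variant of the CI calculation (a sketch, not part of the notebook's pipeline):

```python
import numpy as np
from scipy.stats import t

def mean_ci_t(values, level=0.95):
    """Mean with a t-based confidence interval (better than 1.96 for small n)."""
    s = np.asarray(values, dtype=float)
    s = s[~np.isnan(s)]
    n = s.size
    m = s.mean()
    se = s.std(ddof=1) / np.sqrt(n)
    mult = t.ppf(0.5 + level / 2, df=n - 1)   # wider than 1.96 for small n
    return m, m - mult * se, m + mult * se

m, low, high = mean_ci_t([48.2, 51.0, 47.5, 52.3, 49.9])
print(f'mean={m:.2f}, 95% CI=({low:.2f}, {high:.2f})')
```

As n grows, the t multiplier converges to the familiar 1.96, so the two approaches agree for large groups.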

5) Relationships (scatter/regression → pair plots → heatmaps)

Start with scatter plots and trend lines, then move to pair plots and correlation heatmaps for multi-variable overviews.

Tip: Use Spearman for monotonic (rank-based) relationships and Pearson for linear relationships. Consider log transforms if right-skewed.

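The tip can be made concrete: `pearsonr` and `spearmanr` (imported above) give different answers on a monotonic but nonlinear relationship. A small synthetic illustration:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.linspace(0, 1, 50)
y = np.exp(3 * x)            # monotonic but strongly nonlinear

r_pearson, _ = pearsonr(x, y)
r_spearman, _ = spearmanr(x, y)
print(f'Pearson:  {r_pearson:.3f}')   # < 1: a straight line fits imperfectly
print(f'Spearman: {r_spearman:.3f}')  # 1.000: the ranks agree exactly

# After a log transform the relationship becomes exactly linear
r_log, _ = pearsonr(x, np.log(y))
print(f'Pearson on log(y): {r_log:.3f}')
```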
if {'Time', 'Vitamin_D'}.issubset(df.columns) and np.issubdtype(df['Time'].dtype, np.number):
    fig, axes = plt.subplots(1, 2, figsize=(12, 4))
    sns.regplot(data=df, x='Time', y='Vitamin_D', ax=axes[0])
    axes[0].set_title('Time vs Vitamin_D (raw)')
    axes[0].set_ylabel('Vitamin_D (µg)')

    # log(Vitamin_D) for linearising right-skew
    df_tmp = df.assign(Vitamin_D_log=np.log(df['Vitamin_D'] + 1e-6))
    sns.regplot(data=df_tmp, x='Time', y='Vitamin_D_log', ax=axes[1])
    axes[1].set_title('Time vs log(Vitamin_D)')
    axes[1].set_ylabel('log Vitamin_D')

    plt.tight_layout(); plt.show()
else:
    print('No numeric Time column found for scatter examples.')
num = df.select_dtypes(include=[np.number]).copy()
if 'Vitamin_D' in num.columns:
    num['Vitamin_D_log'] = np.log(num['Vitamin_D'] + 1e-6)

if num.shape[1] >= 2:
    # Pair plot (sample if large)
    sns.pairplot(num.sample(min(300, len(num)), random_state=1))
    plt.suptitle('Pair Plot (sampled)', y=1.02)
    plt.show()

    # Correlation heatmaps
    corr_pear = num.corr(numeric_only=True, method='pearson')
    corr_spea = num.corr(numeric_only=True, method='spearman')

    fig, axes = plt.subplots(1, 2, figsize=(12, 4))
    sns.heatmap(corr_pear, annot=True, fmt='.2f', ax=axes[0])
    axes[0].set_title('Pearson correlation')
    sns.heatmap(corr_spea, annot=True, fmt='.2f', ax=axes[1])
    axes[1].set_title('Spearman correlation')
    plt.tight_layout(); plt.show()
else:
    print('Not enough numeric variables for pair plot / heatmap.')

6) Outliers: Identify & Reflect

Boxplots flag outliers as points outside 1.5×IQR. Outliers can be data errors, rare but valid observations, or signals of subgroups. Investigate before removing.

Quick IQR rule: Q1 = 25th percentile, Q3 = 75th percentile, IQR = Q3 − Q1. Outliers ≈ values below Q1 − 1.5×IQR or above Q3 + 1.5×IQR.
q1, q3 = np.percentile(df['Vitamin_D'].dropna(), [25, 75])
iqr = q3 - q1
low, high = q1 - 1.5*iqr, q3 + 1.5*iqr
mask_out = (df['Vitamin_D'] < low) | (df['Vitamin_D'] > high)
print(f'Potential outliers (IQR rule): {int(mask_out.sum())}')
display(df.loc[mask_out, ['ID','Group','Vitamin_D']].head(10))
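If an outlier turns out to be valid but extreme, one option raised in the exercises is winsorising: clipping values to the IQR fences rather than deleting rows. A minimal sketch with `np.clip` on synthetic values (the numbers are illustrative):

```python
import numpy as np

values = np.array([48.0, 50.5, 47.2, 51.9, 49.3, 120.0])  # 120 is an extreme point

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Clip (winsorise) to the IQR fences instead of dropping the observation
winsorised = np.clip(values, low, high)
print('fences:', round(low, 2), round(high, 2))
print('winsorised:', winsorised)
```

Winsorising keeps the sample size intact while limiting the leverage of extreme points, but it should only follow verification of the data source, as the section above stresses.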

🧪 Exercises

  1. Grouped summaries
    • Create a table of mean, median, n of Vitamin_D by Group × Outcome.
    • Which subgroup has the highest average?
  2. Relationships
    • If Time exists, compute Pearson & Spearman correlations with Vitamin_D (raw and log).
    • Interpret: linear vs monotonic patterns; do logs help?
  3. Pair plot & heatmap
    • Build a numeric subset, add Vitamin_D_log, create a pair plot (sample if needed), and correlation heatmaps.
    • Write two observations (surprising positive/negative associations?).
  4. Outliers
    • Identify outliers by IQR rule within each Group.
    • Suggest a plan (verify source, winsorise, robust stats) and justify briefly.

🔗 For visualising distributions in depth (Q–Q, Shapiro, log-transform demos), see Notebook 4.1.

✅ Conclusion

You’ve conducted a principled EDA: profiling, missingness, distributions recap, group comparisons, relationship exploration (scatter → pair plots → heatmaps), and outlier scanning. These findings should inform your next steps (transformations, model choices, and validation rules).

👉 Next: 4.3 Correlation & Association, where we formalise the relationships you saw here with appropriate methods and caveats.

Further reading