Activity 1

"Too Many Numbers"

Stock   Return (%)   Volatility (%)   Volume (M)   P/E   Beta
AAPL    12           22               80           28    1.2
MSFT    15           25               65           32    1.3
JPM     8            18               45           12    1.1
GS      10           28               30           10    1.4
JNJ     6            12               25           18    0.7
PFE     7            15               35           15    0.8

Your Task

  1. Can you visualize all 5 dimensions on paper?
  2. Which features seem to move together (correlated)?
  3. If you could keep only 2 summary features, what would they capture?
Solution

With 5 features, direct visualization is impossible. But notice: Return and Beta move together, Volatility and Beta move together. PCA finds the best linear combinations that capture the most variance. The first principal component might be a "risk" factor, the second a "size" factor.
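A minimal sketch of this idea in code, assuming NumPy and scikit-learn are available. It runs PCA on the six-stock table above (after standardizing, since PCA is scale-sensitive) and prints how much variance two components capture:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: AAPL, MSFT, JPM, GS, JNJ, PFE
# Columns: Return (%), Volatility (%), Volume (M), P/E, Beta
X = np.array([
    [12, 22, 80, 28, 1.2],
    [15, 25, 65, 32, 1.3],
    [ 8, 18, 45, 12, 1.1],
    [10, 28, 30, 10, 1.4],
    [ 6, 12, 25, 18, 0.7],
    [ 7, 15, 35, 15, 0.8],
])

X_std = StandardScaler().fit_transform(X)   # each feature: mean 0, std 1
pca = PCA(n_components=2).fit(X_std)

print(pca.explained_variance_ratio_)  # share of total variance per component
print(pca.components_)                # loadings: each feature's weight in PC1, PC2
```

Inspecting `pca.components_` shows which original features load heavily on each component -- that is how you would check whether PC1 really behaves like a "risk" factor.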

Activity 2

"Find the Direction of Maximum Spread"

2D data cloud with principal component arrows showing directions of maximum variance

Your Task

  1. If you had to summarize this 2D cloud with one line, where would you draw it?
  2. Why along the widest spread?
  3. What does the second arrow capture?
Solution

The first principal component (PC1) points in the direction of maximum variance. Projecting onto this direction loses the least information. PC2 is perpendicular to PC1 and captures the remaining variance. Together they form a new coordinate system aligned with the data's natural axes.
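The "direction of maximum spread" is the top eigenvector of the data's covariance matrix. A minimal NumPy sketch on a synthetic correlated 2D cloud (illustrative data, not the figure's):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
# Second coordinate mostly follows the first, plus a little noise
data = np.column_stack([x, 0.5 * x + 0.2 * rng.normal(size=500)])

centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

pc1 = eigvecs[:, -1]   # direction of maximum variance
pc2 = eigvecs[:, 0]    # perpendicular direction, remaining variance
print("PC1:", pc1, "with variance", eigvals[-1])
```

The two eigenvectors are orthogonal by construction -- exactly the new coordinate system described above.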

Activity 3

"How Many Components?"

Scree plot showing explained variance ratio for each principal component

Your Task

  1. How much variance does the first component capture?
  2. At which component does adding more stop helping much?
  3. If you keep 3 components, roughly what percentage of total variance do you retain?
Solution

The scree plot shows explained variance per component. Look for the "elbow" -- where the curve flattens. Components after the elbow add little. A common rule: keep enough components to explain 90-95% of total variance. This is dimensionality reduction -- fewer features, nearly the same information.
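The 90-95% rule can be sketched as follows, assuming scikit-learn is available. Fit a full PCA, accumulate the explained-variance ratios, and pick the smallest k that crosses the threshold:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Illustrative data with correlated features (random mixing of 10 dimensions)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))

pca = PCA().fit(X)                                  # keep all components
cumulative = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumulative, 0.95)) + 1      # smallest k reaching 95%
print(f"Keep {k} of {X.shape[1]} components for >= 95% of the variance")
```

As a shortcut, scikit-learn accepts the threshold directly: `PCA(n_components=0.95)` keeps just enough components to explain 95% of the variance.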

Activity 4

"Compress and Reconstruct"

Original data versus PCA reconstruction showing information loss with fewer components

Your Task

  1. What information was lost when reducing to fewer components?
  2. Is the reconstruction close to the original?
  3. When is a lossy approximation acceptable?
Solution

PCA reconstruction: $\hat{X} = X_{\text{reduced}} \cdot W^T$. With fewer components, fine details are lost but major patterns are preserved. Acceptable when: the lost variance is noise, or you need speed/storage savings. This is similar to JPEG compression -- lose some detail, keep the essence.
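A minimal sketch of compress-and-reconstruct, assuming scikit-learn. Its `inverse_transform` computes exactly the formula above (plus re-adding the mean that was subtracted during fitting):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Illustrative correlated data: 100 samples, 5 features
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))

pca = PCA(n_components=2).fit(X)
X_reduced = pca.transform(X)                # 5 features -> 2
X_hat = pca.inverse_transform(X_reduced)    # back to 5 features, with loss

mse = np.mean((X - X_hat) ** 2)
discarded = 1 - pca.explained_variance_ratio_.sum()
print(f"Reconstruction MSE: {mse:.4f}, variance discarded: {discarded:.1%}")
```

The reconstruction error comes entirely from the discarded components -- if they held only noise, the approximation is essentially free.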

Activity 5

"The Map of Similarities"

t-SNE visualizations of the same data with different perplexity settings

Your Task

  1. Which points are clustered together?
  2. Does the distance between clusters have meaning?
  3. The same data is shown 3 times with different settings -- what changed?
Solution

t-SNE is a nonlinear method that preserves local neighborhoods -- similar points stay close. But unlike PCA, distances between distant clusters are NOT meaningful. The perplexity parameter controls how many neighbors each point considers: low perplexity = tight clusters, high perplexity = more global structure.
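A minimal sketch of the perplexity experiment, assuming scikit-learn. The same two-blob dataset is embedded twice with different perplexity values (illustrative data, not the figure's):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(3)
# Two well-separated blobs in 10 dimensions, 50 points each
X = np.vstack([rng.normal(0, 1, (50, 10)),
               rng.normal(5, 1, (50, 10))])

for perplexity in (5, 30):
    emb = TSNE(n_components=2, perplexity=perplexity,
               random_state=0).fit_transform(X)
    print(f"perplexity={perplexity}: embedding shape {emb.shape}")
```

Note that perplexity must be smaller than the number of samples; plotting the two embeddings side by side makes the tight-clusters vs. global-structure trade-off visible.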

Activity 6

"PCA vs. t-SNE: Which to Use?"

Side-by-side comparison of PCA and t-SNE projections of the same dataset

Your Task

  1. Which visualization preserves global distances better?
  2. Which shows clusters more clearly?
  3. Can you apply t-SNE to new data without recomputing everything?
Solution

PCA: linear, fast, invertible, preserves global structure -- use for feature reduction, preprocessing, or when you need to transform new data. t-SNE: nonlinear, slow, not invertible, reveals clusters -- use for visualization only. You cannot apply a t-SNE mapping to new points without rerunning on the full dataset.
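The "new data" difference shows up directly in scikit-learn's API, as this sketch illustrates: a fitted `PCA` exposes `transform` for unseen points, while `TSNE` has no `transform` method at all:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(4)
X_train = rng.normal(size=(100, 5))   # illustrative data
X_new = rng.normal(size=(10, 5))      # points arriving later

pca = PCA(n_components=2).fit(X_train)
print(pca.transform(X_new).shape)     # works: maps new points to (10, 2)

tsne = TSNE(n_components=2, perplexity=10, random_state=0)
emb = tsne.fit_transform(X_train)     # embeds the training set only
print(hasattr(tsne, "transform"))     # False: no mapping for unseen points
```

To place new points in a t-SNE view, you must rerun t-SNE on the combined dataset -- which is why it is reserved for one-off visualization rather than preprocessing.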