Physics-Informed vs Raw MLP Classifier

An interactive comparison of two machine learning approaches for projectile motion classification. Watch a physics-informed softmax classifier compete against a raw multilayer perceptron (MLP), with both models training in real time on the same seeded data to predict projectile range classes.

Live Training & Visualization

Both models train simultaneously on identical data samples. Watch trajectories, decision fields, and confusion matrices update in real time. Adjust hyperparameters to see how physics-informed features compare against raw input features.

Physics-Informed Machine Learning

Problem Setup

Given initial velocity \(v\) and launch angle \(\theta\), classify the projectile range \(R\) into three categories (a small labeling helper is sketched after the list):

  • Short: \(R < 100\) m (blue)
  • Medium: \(100 \le R < 200\) m (green)
  • Long: \(R \ge 200\) m (red)
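
As a concrete reference, a minimal labeling helper for these three classes might look like the following (the function name rangeClass is illustrative, not the demo's actual code):

```js
// Map a projectile range R (in metres) to a class index:
// 0 = Short (R < 100), 1 = Medium (100 <= R < 200), 2 = Long (R >= 200).
function rangeClass(R) {
  if (R < 100) return 0;  // Short (blue)
  if (R < 200) return 1;  // Medium (green)
  return 2;               // Long (red)
}
```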

Physics-Informed Approach

For an ideal projectile (launched and landing at the same height, no air resistance), classical mechanics gives the range exactly:

\[ R = \frac{v^2 \sin(2\theta)}{g} \]

We compute \(\hat{R} = v^2\sin(2\theta)/g\) from the inputs and use it as the sole engineered feature, training a linear softmax classifier on \([1, \hat{R}]\). This strong physics prior dramatically reduces sample complexity.
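
A sketch of that feature construction, assuming \(g = 9.81\ \text{m/s}^2\) and angles in radians (the helper names are hypothetical):

```js
const g = 9.81; // gravitational acceleration, m/s^2

// Closed-form ideal range for launch speed v (m/s) and angle theta (rad).
function idealRange(v, theta) {
  return (v * v * Math.sin(2 * theta)) / g;
}

// Feature vector [1, R_hat] for the linear softmax classifier.
function physicsFeatures(v, theta) {
  return [1, idealRange(v, theta)];
}
```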

Raw MLP Approach

The raw model receives only basic input features:

\[ \mathbf{x} = \left[1, \frac{v}{60}, \sin\theta + \epsilon, \cos\theta + \epsilon\right] \]

where \(\epsilon\) denotes small Gaussian noise (\(\sigma = 0.02\); see Training Details). A two-layer MLP with 16 hidden units (Leaky-ReLU activation) and Adam optimization must learn the \(v^2\sin(2\theta)\) relationship from the data alone.
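
The raw feature construction could be sketched as follows, where gaussNoise is assumed to be a zero-mean Gaussian sampler with \(\sigma = 0.02\) that draws fresh noise on each call:

```js
// Raw features: bias, normalized speed, and noisy sine/cosine of the angle.
function rawFeatures(v, theta, gaussNoise) {
  return [
    1,
    v / 60,                         // speed normalized by a 60 m/s scale
    Math.sin(theta) + gaussNoise(), // noise draw, sigma = 0.02
    Math.cos(theta) + gaussNoise(),
  ];
}
```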

MLP Architecture

Hidden layer with Leaky-ReLU:

\[ \mathbf{h} = \text{LeakyReLU}(\mathbf{W}_1 \mathbf{x} + \mathbf{b}_1) \]

Output layer with softmax:

\[ \mathbf{z} = \mathbf{W}_2 \mathbf{h} + \mathbf{b}_2, \quad \mathbf{p} = \text{softmax}(\mathbf{z}) \]
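
A minimal forward pass matching these equations, with weights stored as nested arrays (the 0.01 leak slope and the matVec helper are assumptions, not taken from the demo):

```js
// Matrix-vector product: W is an array of rows, x a flat array.
function matVec(W, x) {
  return W.map(row => row.reduce((s, w, j) => s + w * x[j], 0));
}

function leakyRelu(z, alpha = 0.01) {
  return z.map(v => (v > 0 ? v : alpha * v));
}

function softmax(z) {
  const m = Math.max(...z);                  // subtract max for stability
  const exps = z.map(v => Math.exp(v - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

// h = LeakyReLU(W1 x + b1), p = softmax(W2 h + b2)
function forward(x, { W1, b1, W2, b2 }) {
  const h = leakyRelu(matVec(W1, x).map((v, i) => v + b1[i]));
  const z = matVec(W2, h).map((v, i) => v + b2[i]);
  return softmax(z);
}
```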

Training uses cross-entropy loss with L2 regularization (\(\lambda=10^{-4}\)) and Adam optimization (\(\beta_1=0.9\), \(\beta_2=0.999\)).
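
For reference, a per-parameter Adam update with the L2 term folded into the gradient might look like this (the state layout and the adamStep name are assumptions):

```js
// Adam update for one flat parameter array. `grads` holds the cross-entropy
// gradient; L2 regularization adds lambda * w[i] to each component.
function adamStep(w, grads, state, lr, lambda = 1e-4,
                  beta1 = 0.9, beta2 = 0.999, eps = 1e-8) {
  state.t += 1;
  for (let i = 0; i < w.length; i++) {
    const gi = grads[i] + lambda * w[i];
    state.m[i] = beta1 * state.m[i] + (1 - beta1) * gi;
    state.v[i] = beta2 * state.v[i] + (1 - beta2) * gi * gi;
    const mHat = state.m[i] / (1 - Math.pow(beta1, state.t)); // bias correction
    const vHat = state.v[i] / (1 - Math.pow(beta2, state.t));
    w[i] -= lr * mHat / (Math.sqrt(vHat) + eps);
  }
}
```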

Training Details

  • Dataset: 600 samples (70% train, 30% validation) generated with a seeded RNG (Mulberry32; sketched below)
  • Features: Physics model: \([1,\hat{R}]\); Raw MLP: \([1, v/60, \sin\theta, \cos\theta]\)
  • Noise: Small Gaussian noise (\(\sigma=0.02\)) on raw features
  • Optimization: SGD at half the configured learning rate for the physics model; Adam at the configured learning rate for the MLP
  • Batch size: Configurable (default 64)
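
The Mulberry32 generator referenced above is typically implemented as follows (a standard version, not necessarily byte-for-byte the demo's):

```js
// Mulberry32: tiny seeded PRNG returning floats in [0, 1).
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}
```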

Real-Time Visualizations

  • Dual trajectories: Same projectiles, colored by each model's predictions
  • Decision fields: 2D \((v,\theta)\) heatmaps showing class boundaries
  • Confusion matrices: Per-class accuracy on validation set
  • Live metrics: EMA training accuracy, validation accuracy, and loss history

Key Observations

  • Sample efficiency: The physics model reaches >90% accuracy within seconds; the MLP needs many more gradient steps to uncover the same structure
  • Decision boundaries: The physics model's boundaries follow the iso-\(R\) contours of the range formula exactly; the MLP only approximates them
  • Interpretability: The physics feature maps directly to the range formula; the MLP's weights are opaque
  • Generalization: Both reach roughly 95%+ validation accuracy, but the physics model is less sensitive to hyperparameter choices

Experimental Controls

  • Steps/tick: Number of gradient updates per frame (1–200)
  • Learning rate: Step size for optimization (0.01–1.0)
  • Batch size: Mini-batch size for stochastic gradient descent (8–256)
  • Tick interval: Milliseconds between training ticks (16–250 ms)
  • Seed: RNG seed for reproducible experiments; change it to explore dataset variations
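
These controls map naturally onto a single settings object; a hypothetical default configuration (field names and the non-documented defaults are illustrative, chosen within the ranges above):

```js
const defaults = {
  stepsPerTick: 10,    // gradient updates per frame (range 1-200)
  learningRate: 0.1,   // optimizer step size (range 0.01-1.0)
  batchSize: 64,       // documented default mini-batch size (range 8-256)
  tickIntervalMs: 33,  // delay between training ticks (range 16-250 ms)
  seed: 1,             // Mulberry32 seed for reproducible data
};
```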

Implementation Notes

  • Numerical integration: Semi-implicit Euler for projectile physics with timestep clamping (sketched after this list)
  • Decision field rendering: Dense grid evaluation (step=6 px) with ImageData API
  • Confusion matrix: Validation set evaluated each tick; heatmap opacity scales with counts
  • Performance: All computation client-side in vanilla JavaScript; training loop non-blocking
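
As a sketch of the integration step, a semi-implicit Euler update under the same no-drag assumption as the range formula (the 0.05 s clamp and the function name are illustrative):

```js
// Semi-implicit Euler: update velocity first, then position with the
// *new* velocity; this keeps the simple projectile integration stable.
function stepProjectile(state, dt) {
  dt = Math.min(dt, 0.05);  // timestep clamping
  state.vy -= 9.81 * dt;    // gravity acts on vertical velocity
  state.x += state.vx * dt;
  state.y += state.vy * dt;
  return state;
}
```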

Educational Impact

This interactive demo highlights the fundamental tradeoff in machine learning: domain knowledge (physics-informed features) versus model flexibility (raw neural networks). Watch both approaches converge to high accuracy, but observe how physics priors accelerate learning and improve interpretability.