Perceptron Playground
Visualize how perceptrons learn linear decision boundaries
Key Concepts
- Linear Separability: Data that can be separated by a straight line (in higher dimensions, a hyperplane).
- Decision Boundary: The line w₁x + w₂y + b = 0 that divides the two classes.
- Learning Rate α: Controls the step size during weight updates (see the update-rule sketch after this list). Too high a value causes oscillation.
- Convergence: Guaranteed for linearly separable data within a finite number of iterations.
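As a concrete reference for these terms, here is a minimal sketch of the classic perceptron learning rule in NumPy. The function name, the ±1 label convention, and the stopping criterion are illustrative choices, not the playground's actual implementation.

```python
import numpy as np

def perceptron_train(X, y, alpha=0.1, max_epochs=100):
    """Minimal perceptron sketch for labels y in {-1, +1}.

    Weights w = (w1, w2) and bias b define the decision boundary
    w1*x + w2*y + b = 0 described in the list above.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for epoch in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            # A point is misclassified if it lands on the wrong side
            # of (or exactly on) the current boundary.
            if yi * (np.dot(w, xi) + b) <= 0:
                # Perceptron update: nudge the boundary toward the mistake,
                # scaled by the learning rate alpha.
                w += alpha * yi * xi
                b += alpha * yi
                errors += 1
        if errors == 0:
            # Converged: every training point is classified correctly.
            return w, b, epoch + 1
    return w, b, max_epochs
```

On linearly separable data this loop terminates; the convergence guarantee above bounds the number of updates, and a wider margin generally means fewer of them.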
Experiments
1. Linear data: Create two clearly separated clusters. The perceptron should find a boundary quickly.
2. XOR pattern: Place points at the corners (−1,−1) and (1,1) as one class and (−1,1) and (1,−1) as the other. Observe that the perceptron never converges (a quick check is sketched after this list).
3. Margin effects: Create data sets with different margins between the classes. Wider margins lead to more stable boundaries.
4. Learning rates: Compare α = 0.01 with α = 1.0. High rates may overshoot the optimal weights.
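Experiment 2 can also be checked numerically. The short, self-contained sketch below (point placement and labels follow the list above; the learning rate and epoch cap are arbitrary) runs the same update rule on the XOR corners and reports how many mistakes remain after each pass.

```python
import numpy as np

# XOR-style corners from experiment 2: no single line separates the classes.
X = np.array([[-1, -1], [1, 1], [-1, 1], [1, -1]], dtype=float)
y = np.array([1, 1, -1, -1])

w, b, alpha = np.zeros(2), 0.0, 0.1
for epoch in range(1, 26):
    mistakes = 0
    for xi, yi in zip(X, y):
        if yi * (np.dot(w, xi) + b) <= 0:   # wrong side of current boundary
            w += alpha * yi * xi            # standard perceptron update
            b += alpha * yi
            mistakes += 1
    print(f"epoch {epoch:2d}: {mistakes} mistakes, w={w}, b={b:.2f}")
    if mistakes == 0:
        break

# The mistake count never reaches 0: the weights cycle back to where they
# started, which is exactly the non-convergence the XOR experiment shows.
```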
Historical Context
Frank Rosenblatt introduced the perceptron in 1957 at Cornell Aeronautical Laboratory. Initially hyped as a breakthrough toward artificial intelligence, it faced criticism after Minsky and Papert (1969) proved a key limitation: a single-layer perceptron cannot represent, and therefore cannot learn, functions such as XOR.
This limitation contributed to the first "AI winter." Interest revived in the 1980s, when multi-layer perceptrons trained with backpropagation overcame these constraints, paving the way for modern deep learning.