Lesson 10.3: Forward Propagation & Backpropagation (Conceptual)
🔹 Forward Propagation
Forward propagation is the process where input data passes through the network to produce an output.
- Inputs are multiplied by weights.
- Bias is added.
- The result passes through an activation function.
- Output is generated at the final layer.
Mathematical Representation:
a^{[l]} = \text{activation}(W^{[l]} \cdot a^{[l-1]} + b^{[l]})
- a^{[l-1]} → Activation from previous layer
- W^{[l]}, b^{[l]} → Weights and bias for current layer
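The layer equation above can be sketched in a few lines of plain Python. The layer sizes, weight values, and the choice of a sigmoid activation here are illustrative assumptions, not part of the lesson:

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def forward_layer(a_prev, W, b):
    """One forward step: a = activation(W · a_prev + b).
    W is a list of rows (one row of weights per neuron), b a list of biases."""
    a = []
    for row, bias in zip(W, b):
        z = sum(w * x for w, x in zip(row, a_prev)) + bias  # weighted sum + bias
        a.append(sigmoid(z))                                # activation
    return a

# Hypothetical layer: 2 inputs feeding 2 neurons
a_prev = [1.0, 0.5]
W = [[0.4, -0.2], [0.1, 0.3]]
b = [0.0, 0.1]
print(forward_layer(a_prev, W, b))
```

Stacking several such calls, each feeding its output into the next, is exactly what "passing through the network" means.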
🔹 Loss Function
- Measures the difference between the predicted output and the actual target.
- Examples: Mean Squared Error (MSE) for regression, Cross-Entropy Loss for classification.
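Both example losses can be written directly from their definitions; this is a minimal sketch for the binary-classification case of cross-entropy (the `eps` clamp is a common numerical safeguard, not from the lesson):

```python
import math

def mse(y_true, y_pred):
    # Mean Squared Error: average of squared differences (regression).
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Cross-entropy for binary classification; eps guards against log(0).
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(y_true)
```

Either way, a smaller value means the predictions sit closer to the targets, which is what backpropagation will try to achieve.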
🔹 Backpropagation
Backpropagation is the process of updating weights to minimize the loss.
- Compute the gradient of the loss w.r.t. the weights using the chain rule.
- Propagate errors backward through the network.
- Update weights using gradient descent:
W = W - \eta \cdot \frac{\partial \text{Loss}}{\partial W}
- \eta → Learning rate
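The update rule above can be demonstrated on a deliberately tiny model. This is a hedged sketch: the one-parameter model y = w·x, the squared-error loss, and all the numbers are illustrative assumptions chosen so the gradient can be written by hand:

```python
def gradient_step(w, grad, eta):
    # One gradient-descent update: w <- w - eta * dLoss/dw
    return w - eta * grad

# Hypothetical 1-D model: y_pred = w * x, loss = (w*x - y_true)^2,
# so by the chain rule dLoss/dw = 2 * (w*x - y_true) * x.
w, x, y_true, eta = 0.0, 2.0, 4.0, 0.05
for _ in range(100):
    grad = 2.0 * (w * x - y_true) * x
    w = gradient_step(w, grad, eta)
print(w)  # w converges toward 2.0, since 2.0 * x matches y_true
```

The learning rate η controls the step size: too large and the updates overshoot and diverge, too small and convergence crawls, which is why it needs careful tuning.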
🔹 Example (Conceptual)
- Forward → Compute predicted output y_{pred}
- Loss → Compute Loss(y_{true}, y_{pred})
- Backward → Adjust weights to reduce the loss
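The three steps of this cycle can be put together for a single sigmoid neuron trained on one example. Everything here (the single neuron, the squared-error loss, the specific numbers) is an illustrative assumption, but the forward → loss → backward structure is exactly the one described above:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b, eta = 0.5, 0.0, 0.5   # initial weight, bias, learning rate
x, y_true = 1.5, 1.0        # one training example

for epoch in range(200):
    # Forward: compute the predicted output
    z = w * x + b
    y_pred = sigmoid(z)
    # Loss: squared error between prediction and target
    loss = (y_pred - y_true) ** 2
    # Backward: chain rule through loss -> sigmoid -> linear part
    dloss_dy = 2.0 * (y_pred - y_true)
    dy_dz = y_pred * (1.0 - y_pred)   # derivative of the sigmoid
    dz_dw, dz_db = x, 1.0
    # Gradient-descent updates for weight and bias
    w -= eta * dloss_dy * dy_dz * dz_dw
    b -= eta * dloss_dy * dy_dz * dz_db

print(loss)  # the loss shrinks toward 0 as training proceeds
```

Repeating this cycle over many examples and many layers is, conceptually, all that training a neural network involves.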
🔹 Advantages
- Enables neural networks to learn complex patterns.
- Fundamental for training deep networks.
🔹 Limitations
- Can be computationally intensive for deep networks.
- Requires careful learning rate tuning.
✅ Quick Recap:
- Forward Propagation → Compute output from input.
- Loss → Measure prediction error.
- Backpropagation → Update weights to minimize error.
