Meta-Learning: Learning to Learn Fast
In MAML, the inner-loop update uses a loss $\mathcal{L}^{(0)}$ computed on one mini data batch, while the meta-objective uses a loss $\mathcal{L}^{(1)}$ computed on a different data batch. Optimizing this meta-objective requires differentiating through the inner update, which brings in second derivatives; First-Order MAML (FOMAML) omits these second-derivative terms for a simpler and cheaper implementation. Reptile is another simple, model-agnostic meta-learning algorithm based on plain gradient descent: it repeatedly samples a task, trains on it for several SGD steps, and then moves the meta-parameters toward the task-adapted weights. When their gradient updates are expanded and approximated by the leading terms, both MAML and Reptile turn out to optimize for the same two goals: one term improves task performance, and another term, the inner product between gradients computed on different mini batches, improves within-task generalization.
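Below is a minimal sketch of a single FOMAML meta-update in PyTorch, under illustrative assumptions: `support_batch` plays the role of data batch (0) for the inner update, `query_batch` plays the role of batch (1) for the meta-objective, and the cross-entropy loss, learning rates, and `meta_optimizer` are placeholders rather than anything prescribed by the papers.

```python
import copy
import torch
import torch.nn.functional as F

def fomaml_step(model, meta_optimizer, support_batch, query_batch,
                inner_lr=0.01, inner_steps=1):
    """One first-order MAML meta-update for a single task (sketch)."""
    x_s, y_s = support_batch   # batch (0): used for the inner-loop loss L^(0)
    x_q, y_q = query_batch     # batch (1): used for the meta-objective L^(1)

    # Inner loop: adapt a task-specific copy of the model with plain SGD.
    fast_model = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(fast_model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        loss = F.cross_entropy(fast_model(x_s), y_s)
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()

    # Meta-objective L^(1), evaluated at the adapted parameters.
    meta_loss = F.cross_entropy(fast_model(x_q), y_q)
    grads = torch.autograd.grad(meta_loss, fast_model.parameters())

    # First-order approximation: apply the gradient taken w.r.t. the adapted
    # parameters directly to the meta-parameters, skipping the
    # second-derivative terms that full MAML would backpropagate through.
    meta_optimizer.zero_grad()
    for p, g in zip(model.parameters(), grads):
        p.grad = g.detach()
    meta_optimizer.step()
```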
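For comparison, here is a similarly hedged sketch of one Reptile meta-update: run a few SGD steps on a sampled task, then nudge the meta-parameters toward the adapted weights. The `task_loader`, loss function, and step sizes are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def reptile_step(model, task_loader, inner_lr=0.02, meta_lr=0.1, inner_steps=5):
    """One Reptile meta-update on a single sampled task (sketch)."""
    # Work on a copy so the inner loop does not touch the meta-parameters.
    fast_model = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(fast_model.parameters(), lr=inner_lr)

    # Inner loop: several ordinary SGD steps on data from this task.
    data_iter = iter(task_loader)
    for _ in range(inner_steps):
        x, y = next(data_iter)
        loss = F.cross_entropy(fast_model(x), y)
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()

    # Outer update: theta <- theta + beta * (theta_adapted - theta),
    # i.e. move the meta-parameters a small step toward the adapted weights.
    with torch.no_grad():
        for p, fast_p in zip(model.parameters(), fast_model.parameters()):
            p.add_(meta_lr * (fast_p - p))
```

The contrast with the FOMAML sketch above is the outer step: Reptile never evaluates a separate meta-objective on a held-out batch, it simply interpolates the meta-parameters toward the task-adapted solution.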