“In the context of machine learning, Metalearning is the process of learning to learn. A meta-learning algorithm uses the experience to change certain aspects of a learning algorithm, or the learning method itself, such that the modified learner is better than the original learner at learning from additional experience.”
Meta-learning is currently an exciting research trend in the machine-learning and artificial-intelligence communities that tackles the problem of learning to learn. Put simply, meta-learning is the ability to acquire knowledge versatility.
Types of Meta-Learning Models
Meta-learning models follow different techniques. Some focus on optimizing neural-network architectures, while others focus on finding the right data with which to train a given model.
We expect a good meta-learning model to adapt or generalize to new tasks and new environments that were never encountered during training. This adaptation happens at test time, with only limited exposure to the new task's configuration, after which the adapted model can complete the new task. This is why meta-learning is also known as learning to learn.
Here are some of the Meta-Learning Models:
Few-Shot Meta-Learning
Few-shot meta-learning builds deep neural networks that can learn from very small datasets, mimicking the way humans can pick up a concept from just a handful of examples.
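Few-shot training is usually organized into episodes: a small "support" set the learner adapts on and a "query" set it is then evaluated on. The sketch below shows one way an N-way K-shot episode might be sampled; the toy dataset and function names are illustrative, not from any particular library.

```python
import random

# Hypothetical toy dataset: class label -> list of examples.
dataset = {
    "cat": ["cat1", "cat2", "cat3", "cat4"],
    "dog": ["dog1", "dog2", "dog3", "dog4"],
    "bird": ["bird1", "bird2", "bird3", "bird4"],
}

def sample_episode(dataset, n_way=2, k_shot=1, n_query=2, seed=0):
    """Sample one N-way K-shot episode: a small support set the
    learner adapts on, plus a query set it is evaluated on."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for label in classes:
        examples = rng.sample(dataset[label], k_shot + n_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

support, query = sample_episode(dataset)
print(len(support), len(query))  # 2 support examples, 4 query examples
```

A 2-way 1-shot episode like this yields only two labeled examples to learn from, which is exactly the data-scarce regime few-shot models are trained to handle.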
Optimizer Meta-Learning
Optimizer meta-learning models focus on learning how to optimize a neural network so that it accomplishes a task better. These models typically use one neural network to propose updates to the parameters or hyperparameters of another neural network in order to improve performance on a target task.
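A minimal sketch of the two-level structure, with a heavy simplification: here the "meta-learner" is just a search over the learning-rate hyperparameter of an inner gradient-descent learner, whereas a true optimizer meta-learner would be a neural network producing the updates. All names and the quadratic toy objective are assumptions for illustration.

```python
def inner_train(lr, steps=10, w0=0.0, target=3.0):
    """Inner loop: plain gradient descent on f(w) = (w - target)**2."""
    w = w0
    for _ in range(steps):
        grad = 2.0 * (w - target)
        w -= lr * grad
    return (w - target) ** 2  # final loss after the inner loop

# Outer ("meta") loop: evaluate each candidate hyperparameter by how
# well the inner learner does with it, and keep the best one.
candidates = [0.01, 0.1, 0.5, 0.9]
best_lr = min(candidates, key=inner_train)
print(best_lr)  # 0.5 drives the final loss to zero on this objective
```

The key point is the nesting: the outer loop's objective is the inner loop's final performance, which is what distinguishes meta-optimization from ordinary training.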
Metric Meta-Learning
The objective of metric meta-learning is to determine a metric space in which learning is particularly efficient.
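Once a good metric space exists, classification can be as simple as assigning each query to the class with the nearest "prototype" (the mean of that class's support embeddings). The sketch below hard-codes toy 2-D embeddings; a real metric meta-learner would learn the embedding function itself so that this nearest-prototype rule works well.

```python
import numpy as np

# Toy 2-D "embeddings" standing in for a learned embedding network's output.
support = {
    "a": np.array([[0.0, 0.0], [0.2, 0.1]]),
    "b": np.array([[5.0, 5.0], [4.8, 5.1]]),
}

# Class prototype = mean of that class's support embeddings.
prototypes = {label: pts.mean(axis=0) for label, pts in support.items()}

def classify(x):
    """Assign x to the class whose prototype is nearest (Euclidean)."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

print(classify(np.array([0.1, 0.2])))  # "a"
print(classify(np.array([4.9, 4.9])))  # "b"
```

In this setup all the learning capacity lives in the embedding function; the classifier on top is a fixed, parameter-free distance rule.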
Recurrent Model Meta-Learning
Recurrent model meta-learning is tailored to recurrent neural networks (RNNs) such as Long Short-Term Memory (LSTM) networks. In this architecture, the meta-learner trains an RNN that processes a dataset sequentially and then processes new inputs from the task. In an image classification setting, this might mean feeding in the (image, label) pairs of a dataset one by one, followed by new examples that must be classified.
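The sequential feeding protocol above can be sketched as follows. One common convention (an assumption here, not the only option) is to pair each input with the *previous* step's label, so the RNN must predict a label before it is revealed; the example strings are placeholders for real images.

```python
# Sketch of the input sequence an RNN meta-learner would consume: each
# step pairs the current example with the previous step's label, so the
# label for x_t is only revealed at step t+1, after prediction.
examples = [("img1", "cat"), ("img2", "dog"), ("img3", "cat")]
query = "img4"

def build_sequence(examples, query, pad_label=None):
    steps = []
    prev_label = pad_label
    for x, y in examples:
        steps.append((x, prev_label))
        prev_label = y
    steps.append((query, prev_label))  # query step: label must be predicted
    return steps

seq = build_sequence(examples, query)
print(seq)
# [('img1', None), ('img2', 'cat'), ('img3', 'dog'), ('img4', 'cat')]
```

The RNN's hidden state accumulates the (image, label) history, so "learning" the new task happens inside a single forward pass rather than through weight updates.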
Meta-learning opportunities arise in many different forms and can be embraced with a wide spectrum of learning techniques.