DEEPCHECKS GLOSSARY

Catastrophic Forgetting

Machine Learning is all around us, and with so much data at our fingertips, the models we’re creating are immensely deep and complicated. Faster connections and greater computing power have resulted in an AI revolution.

Recommendation algorithms, prediction systems, and image and speech recognition are just a few examples of the technologies being created on a daily basis. You’d be surprised at how many machine learning technologies are impacting your life right now.

Artificial intelligence is not perfect. Like humans, models are capable of making mistakes and of forgetting. In the context of neural networks, forgetting can be disastrous, similar to a severe case of amnesia.

So how do Neural Networks forget?

  • During the training phase, a neural network dynamically develops weighted paths between its nodes. These paths are constructed from the data provided to the machine. When fresh information is fed in, new pathways are established, and the network can “forget” the prior tasks it was trained on. Sometimes the error margin merely rises; at other times, the network forgets the objective entirely. This is known as Neural Network Catastrophic Forgetting (or Catastrophic Interference).
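To make this concrete, here is a minimal, hypothetical sketch in plain NumPy (all names and the toy tasks are illustrative, not from any particular library): a tiny logistic-regression “network” is trained on task A, then retrained only on a conflicting task B, and its accuracy on task A collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true):
    """Generate a linearly separable binary task defined by w_true."""
    X = rng.normal(size=(500, 2))
    y = (X @ w_true > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.5, epochs=200):
    """Plain gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(((X @ w > 0) == (y > 0.5)).mean())

# Task A and task B have conflicting decision boundaries.
Xa, ya = make_task(np.array([1.0, 1.0]))
Xb, yb = make_task(np.array([1.0, -1.0]))

w = train(np.zeros(2), Xa, ya)
acc_a_before = accuracy(w, Xa, ya)   # near-perfect on task A

w = train(w, Xb, yb)                 # sequential training on B only
acc_a_after = accuracy(w, Xa, ya)    # task A is largely "forgotten"
```

The same weights that encoded task A are overwritten by gradient updates for task B; deep networks exhibit the same effect, just across many more shared parameters.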

How serious is Catastrophic Forgetting?

Currently, catastrophic forgetting in deep learning is not a serious issue because most Neural Networks are trained under human supervision. To put it another way, engineers carefully curate the data they supply to the network to eliminate biases and other problems that may arise from raw data.

However, as Machine Learning grows more sophisticated, we will be able to give our agents independent, continuous learning. Neural networks will then keep learning as they process new data, without being overseen by humans.

As you may have guessed, one of the most serious concerns with autonomous learning is that we don’t know what sort of data the network is using to learn. If it ends up learning from data that is too distant from its fundamental training, it may experience catastrophic forgetting.

So all we need to do to avoid catastrophic forgetting in neural networks is avoid autonomous networks, right? Not exactly. Interference has been observed even when the new training sets were not significantly different from the prior ones.

Even comparable data sets can cause Catastrophic Interference, and we won’t know for sure until it happens. The hidden layers of a neural network are a bit of a black box between input and output, so we can’t tell in advance whether new data will break a crucial pathway and trigger a failure.


How to tackle Catastrophic Forgetting?

While the possibility of Catastrophic Interference will not go away, it is a rather minor issue. There are several design techniques for reducing the risk, such as node sharpening and latent learning.

  • From a strategic standpoint, saving a copy of the network before retraining it is a smart way to stay protected in case anything goes wrong.
  • Another frequent method is to train the neural network on all of the data at once. The problem only occurs with sequential learning, when fresh information interferes with what the network has already learned.
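A closely related, practical variant of the second idea is rehearsal: replaying old data alongside the new. The self-contained NumPy sketch below is a hypothetical illustration (the toy tasks and names are invented for this example): a logistic model retrained on a conflicting task B alone forgets task A, while a model that rehearses task A’s data during retraining retains much more of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true):
    """Generate a linearly separable binary task defined by w_true."""
    X = rng.normal(size=(500, 2))
    y = (X @ w_true > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.5, epochs=200):
    """Plain gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(((X @ w > 0) == (y > 0.5)).mean())

Xa, ya = make_task(np.array([1.0, 1.0]))   # old task
Xb, yb = make_task(np.array([1.0, -1.0]))  # new, conflicting task

# Naive sequential learning: train on A, then on B alone.
w_seq = train(train(np.zeros(2), Xa, ya), Xb, yb)

# Rehearsal: when retraining on B, replay task A's data as well.
X_mix = np.vstack([Xa, Xb])
y_mix = np.concatenate([ya, yb])
w_reh = train(train(np.zeros(2), Xa, ya), X_mix, y_mix)

acc_seq = accuracy(w_seq, Xa, ya)  # task A mostly forgotten
acc_reh = accuracy(w_reh, Xa, ya)  # task A much better retained
```

Because these two toy tasks directly conflict, even rehearsal can only reach a compromise for a linear model; with a higher-capacity network that can separate the tasks internally, the retained accuracy is typically much higher.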

Catastrophic Forgetting, in reinforcement learning and elsewhere, is only one of the numerous issues that machine learning specialists are working to solve. While artificial intelligence has enormous promise, we are still discovering and testing it. Intelligence, artificial or natural, has never been an easy subject, but we are making great strides in understanding it better.

Machine Learning is a fascinating area, not only for its applications but also because it makes us examine our own human nature. Consider this: Turing’s main criterion was to create a machine that could not be distinguished from a person.