
Val_loss Did Not Improve From Inf + Loss:nan Error While Training

I have a problem that occurs when I start training my model. The log says that val_loss did not improve from inf and that loss is nan. At the beginning I thought it was because of the …

Solution 1:

A few comments...

In this kind of situation, the most practical approach is trial and error. It looks like your parameters diverged during training, and there are many possible causes. It also appears that you are regularizing the network (dropout, BatchNorm, etc.).

Suggestions:

  • Normalize your input data before feeding it into the network (see the normalization sketch after this list).
  • Comment out or remove all the dropouts (regularization), kernel_initializer arguments (use the default initialization), EarlyStopping, etc. that you are using, and let the network be a plain CNN with just convolution, pooling, batch normalization, and dense layers (see the baseline sketch after this list). If you see improvement, start adding them back one by one and you will find out which one was causing the problem.
  • Try using more units in the dense layer, e.g. 1000, since the dense layer has to decode everything (the image features) that the convolutional layers have compressed.
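For the first suggestion, here is a minimal sketch of input normalization. The array shapes and variable names (`x_train`, `x_val`) are placeholder assumptions, not taken from the original question; the point is only that pixel values should be scaled (or standardized) before they reach the network:

```python
import numpy as np

# Placeholder data: uint8 images in [0, 255] (hypothetical shapes).
x_train = np.random.randint(0, 256, size=(64, 32, 32, 3), dtype=np.uint8)
x_val = np.random.randint(0, 256, size=(16, 32, 32, 3), dtype=np.uint8)

# Option 1: scale pixels to [0, 1].
x_train = x_train.astype("float32") / 255.0
x_val = x_val.astype("float32") / 255.0

# Option 2: standardize with the *training-set* statistics.
mean = x_train.mean(axis=(0, 1, 2), keepdims=True)
std = x_train.std(axis=(0, 1, 2), keepdims=True) + 1e-7  # avoid division by zero
x_train = (x_train - mean) / std
x_val = (x_val - mean) / std
```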
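For the second and third suggestions, here is a sketch of the kind of "plain CNN" baseline meant above: only convolution, pooling, batch normalization, and dense layers, default initializers, no dropout, no EarlyStopping, and a larger dense layer. The input shape, layer sizes, and number of classes are assumptions for illustration, not the poster's actual architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),           # assumed input shape
    layers.Conv2D(32, 3, activation="relu"),   # default kernel_initializer
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1000, activation="relu"),     # larger dense layer, as suggested
    layers.Dense(10, activation="softmax"),    # assumed number of classes
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```

If this stripped-down baseline trains without the loss going to nan, add the removed pieces (dropout, custom initializers, callbacks) back one at a time to isolate the culprit.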
