Number of iterations as a hyper-parameter in neural network [closed]

-1

How do I determine the optimal number of iterations when training a neural network?

python neural-network

asked Jan 1 at 18:24 by Avocano

closed as too broad by Carcigenicate, Matias Valdenegro, tripleee, gnat, desertnaut Jan 12 at 14:16

Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. Avoid asking multiple distinct questions at once. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.

1 Answer

2

One way of doing it is to split your training data into a training set and a validation set. During training, the error on the training set should decrease steadily; the error on the validation set will also decrease at first, but at some point it starts to increase again. That is the point at which the network begins to overfit the training data, meaning the model is adapting to random variations in the data rather than learning its true regularities. You should retain the model with the lowest overall validation error. This technique is called early stopping.
Alternatively, you can use dropout. With a high enough dropout probability you can train for essentially as long as you want, and overfitting will not become a significant issue.
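
Here is a minimal sketch of both ideas using Keras, which the original answer does not specify; the toy data, layer sizes, dropout rate, and patience value are illustrative assumptions, not recommendations:

    # Early stopping with Keras: the EarlyStopping callback watches the
    # validation loss and keeps the weights from the epoch with the lowest
    # validation error, so the number of iterations is chosen automatically.
    import numpy as np
    from tensorflow import keras

    # Toy data, purely illustrative.
    x = np.random.rand(1000, 20)
    y = (x.sum(axis=1) > 10).astype("float32")

    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        keras.layers.Dropout(0.5),  # dropout, as suggested above (rate is a guess)
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    early_stop = keras.callbacks.EarlyStopping(
        monitor="val_loss",          # watch the validation error
        patience=10,                 # stop after 10 epochs without improvement
        restore_best_weights=True,   # retain the model with the lowest val_loss
    )

    # validation_split carves a validation set out of the training data;
    # epochs is set generously, because early stopping decides when to quit.
    model.fit(x, y, validation_split=0.2, epochs=1000, callbacks=[early_stop])

The design point is that the epoch count stops being a hyper-parameter you tune directly: you set it to something generous and let the validation error decide where training actually ends.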






edited Jan 1 at 18:51 · answered Jan 1 at 18:28 by braaterAfrikaaner