A1 Peer-reviewed original article in a scientific journal

Quality of randomness and node dropout regularization for fitting neural networks




Authors: Koivu Aki, Kakko Joona-Pekko, Mäntyniemi Santeri, Sairanen Mikko

Publisher: PERGAMON-ELSEVIER SCIENCE LTD

Publication year: 2022

Journal: Expert Systems with Applications

Journal name in database: EXPERT SYSTEMS WITH APPLICATIONS

Journal acronym: EXPERT SYST APPL

Article number: 117938

Volume: 207

Number of pages: 10

ISSN: 0957-4174

eISSN: 1873-6793

DOI: https://doi.org/10.1016/j.eswa.2022.117938

Web address: https://doi.org/10.1016/j.eswa.2022.117938

Self-archived copy's web address: https://research.utu.fi/converis/portal/detail/Publication/176026934


Abstract
Quality of randomness in random number generation is an attribute produced by a sufficiently random process and a sufficiently large sample size, and various statistical tests have been proposed to assess it. Random number generation is applied widely across the natural sciences, and one of its most prominent and widely adopted application areas is machine learning, where bounded or stochastic random number generation is used in a variety of tasks. Artificial neural networks, for example those used in deep learning, rely on random number generation for weight initialization, optimization and methods that aim to reduce the overfitting of these models. One widely adopted such method is node dropout, whose internal logic is heavily dictated by the random number generator it uses. This study investigated the relationship between quality of randomness and node dropout regularization in reducing the overfitting of neural networks. Our experiments included five different random number generators, whose output was tested for quality of randomness with various statistical tests. These sets of random numbers were then used to drive the internal logic of a node dropout layer in a neural network model in four different classification tasks. The impact of data size and relevant hyperparameters was tested, and the overall amount of overfitting was compared against each generator's randomness results. The results suggest that true random number generation in node dropout can be either advantageous or disadvantageous, depending on the dataset and prediction problem at hand. These findings suggest that fitting neural networks in general can be improved by adding random number generation experimentation to the modelling process.
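
The abstract describes node dropout whose keep/drop decisions are driven by the output of a random number generator. The sketch below is a minimal, hypothetical illustration of that mechanism (not the authors' implementation): an inverted node dropout layer that accepts a pluggable random number source, so that different generators, including a true random number generator wrapped behind the same interface, could be swapped in for the kind of experimentation the study describes. The function names and the use of NumPy generators are illustrative assumptions.

```python
import numpy as np

def dropout_mask(shape, drop_prob, rng):
    """Build a scaled binary dropout mask from the supplied random source.

    `rng` is assumed to expose a `random(size)` method returning uniform
    [0, 1) samples, e.g. numpy.random.Generator; a true/hardware RNG could
    be wrapped behind the same interface.
    """
    uniform = rng.random(shape)           # one uniform draw per node
    keep = uniform >= drop_prob           # a node is kept if its draw clears drop_prob
    # Inverted dropout: rescale kept activations so the layer's expected
    # output at training time matches its output at inference time.
    return keep.astype(np.float64) / (1.0 - drop_prob)

def dropout_forward(activations, drop_prob, rng, training=True):
    """Apply node dropout to a layer's activations during training."""
    if not training or drop_prob == 0.0:
        return activations
    mask = dropout_mask(activations.shape, drop_prob, rng)
    return activations * mask

# Example: the same layer driven by two differently seeded pseudo-random generators.
rng_a = np.random.default_rng(seed=1)
rng_b = np.random.default_rng(seed=2)
x = np.ones((4, 8))                       # dummy activations: batch of 4, 8 nodes
out_a = dropout_forward(x, drop_prob=0.5, rng=rng_a)
out_b = dropout_forward(x, drop_prob=0.5, rng=rng_b)
print(out_a)
print(out_b)
```

Keeping the random source as an explicit argument is what makes it possible to compare generators of different randomness quality while the rest of the training pipeline stays fixed.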

Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




