
Deep Neural Networks with Dependent Weights: Gaussian Process Mixture Limit, Heavy Tails, Sparsity and Compressibility

DC Field    Value    Language
dc.contributor.author    Lee, Hoil    -
dc.contributor.author    Ayed, Fadhel    -
dc.contributor.author    Jung, Paul    -
dc.contributor.author    Lee, Juho    -
dc.contributor.author    Hongseok Yang    -
dc.contributor.author    Caron, Francois    -
dc.date.accessioned    2024-01-10T22:00:20Z    -
dc.date.available    2024-01-10T22:00:20Z    -
dc.date.created    2023-12-18    -
dc.date.issued    2023-09    -
dc.identifier.issn    1532-4435    -
dc.identifier.uri    https://pr.ibs.re.kr/handle/8788114/14545    -
dc.description.abstract    This article studies the infinite-width limit of deep feedforward neural networks whose weights are dependent, and modelled via a mixture of Gaussian distributions. Each hidden node of the network is assigned a nonnegative random variable that controls the variance of the outgoing weights of that node. We make minimal assumptions on these per-node random variables: they are iid and their sum, in each layer, converges to some finite random variable in the infinite-width limit. Under this model, we show that each layer of the infinite-width neural network can be characterised by two simple quantities: a non-negative scalar parameter and a Lévy measure on the positive reals. If the scalar parameters are strictly positive and the Lévy measures are trivial at all hidden layers, then one recovers the classical Gaussian process (GP) limit, obtained with iid Gaussian weights. More interestingly, if the Lévy measure of at least one layer is non-trivial, we obtain a mixture of Gaussian processes (MoGP) in the large-width limit. The behaviour of the neural network in this regime is very different from the GP regime. One obtains correlated outputs, with non-Gaussian distributions, possibly with heavy tails. Additionally, we show that, in this regime, the weights are compressible, and some nodes have asymptotically non-negligible contributions, therefore representing important hidden features. Many sparsity-promoting neural network models can be recast as special cases of our approach, and we discuss their infinite-width limits; we also present an asymptotic analysis of the pruning error. We illustrate some of the benefits of the MoGP regime over the GP regime in terms of representation learning and compressibility on simulated, MNIST and Fashion MNIST datasets.    -
dc.language    English    -
dc.publisher    MICROTOME PUBL    -
dc.title    Deep Neural Networks with Dependent Weights: Gaussian Process Mixture Limit, Heavy Tails, Sparsity and Compressibility    -
dc.type    Article    -
dc.type.rims    ART    -
dc.identifier.wosid    001111575000001    -
dc.identifier.rimsid    82269    -
dc.contributor.affiliatedAuthor    Hongseok Yang    -
dc.identifier.bibliographicCitation    JOURNAL OF MACHINE LEARNING RESEARCH, v.24, pp.1 - 78    -
dc.relation.isPartOf    JOURNAL OF MACHINE LEARNING RESEARCH    -
dc.citation.title    JOURNAL OF MACHINE LEARNING RESEARCH    -
dc.citation.volume    24    -
dc.citation.startPage    1    -
dc.citation.endPage    78    -
dc.type.docType    Article    -
dc.description.journalClass    1    -
dc.description.isOpenAccess    N    -
dc.description.journalRegisteredClass    scie    -
dc.relation.journalResearchArea    Automation & Control Systems    -
dc.relation.journalResearchArea    Computer Science    -
dc.relation.journalWebOfScienceCategory    Automation & Control Systems    -
dc.relation.journalWebOfScienceCategory    Computer Science, Artificial Intelligence    -
dc.subject.keywordPlus    DISTRIBUTIONS    -
dc.subject.keywordPlus    REGRESSION    -
dc.subject.keywordPlus    MODELS    -
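
The abstract above describes a prior in which each node draws a nonnegative scale variable and all of that node's outgoing weights share it, making the weights within a row dependent. Below is a minimal, hypothetical NumPy sketch of one such scale-mixture prior; the inverse-gamma per-node scales and the simple 1/width normalisation are illustrative assumptions only, whereas the paper covers a general class of iid nonnegative scales whose per-layer sums converge in the infinite-width limit.

```python
# Minimal sketch of a dependent-weight prior: each node draws a nonnegative
# scale, and all of its outgoing weights share that scale. The inverse-gamma
# scales and 1/width normalisation are illustrative choices, not the paper's
# general construction.
import numpy as np

rng = np.random.default_rng(0)


def sample_layer(n_in, n_out, a=1.0, b=1.0):
    """Weight matrix whose i-th row shares the per-node variance lam[i]."""
    lam = 1.0 / rng.gamma(shape=a, scale=1.0 / b, size=n_in)  # heavy-tailed per-node scales
    z = rng.normal(size=(n_in, n_out))                        # iid standard normals
    return np.sqrt(lam / n_in)[:, None] * z                   # W[i, j] ~ N(0, lam[i] / n_in)


def sample_network_output(x, width=1000, depth=3):
    """One draw of a random ReLU network with dependent weights, evaluated at x."""
    h = x
    for _ in range(depth):
        h = np.maximum(h @ sample_layer(h.shape[1], width), 0.0)
    return h @ sample_layer(h.shape[1], 1)                    # scalar output layer


# Example: outputs at two inputs under a single draw of the random network.
x = np.array([[1.0, -0.5], [0.3, 0.8]])
print(sample_network_output(x))
```

With trivial (degenerate) per-node scales this reduces to the usual iid Gaussian initialisation and hence the GP limit; heavy-tailed scales like the ones sketched here are the kind of setting in which the mixture-of-GPs behaviour described in the abstract can arise.
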
Appears in Collections:
Pioneer Research Center for Mathematical and Computational Sciences(수리 및 계산과학 연구단) > 1. Journal Papers (저널논문)
Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
