
α-stable convergence of heavy-/light-tailed infinitely wide neural networks

DC Field: Value

dc.contributor.author: Jung, Paul
dc.contributor.author: Lee, Hoil
dc.contributor.author: Lee, Jiho
dc.contributor.author: Hongseok Yang
dc.date.accessioned: 2024-01-16T22:00:22Z
dc.date.available: 2024-01-16T22:00:22Z
dc.date.created: 2023-08-02
dc.date.issued: 2023-12
dc.identifier.issn: 0001-8678
dc.identifier.uri: https://pr.ibs.re.kr/handle/8788114/14629
dc.description.abstract: We consider infinitely wide multi-layer perceptrons (MLPs) which are limits of standard deep feed-forward neural networks. We assume that, for each layer, the weights of an MLP are initialized with independent and identically distributed (i.i.d.) samples from either a light-tailed (finite-variance) or a heavy-tailed distribution in the domain of attraction of a symmetric $\alpha$-stable distribution, where $\alpha\in(0,2]$ may depend on the layer. For the bias terms of the layer, we assume i.i.d. initializations with a symmetric $\alpha$-stable distribution having the same $\alpha$ parameter as that layer. Non-stable heavy-tailed weight distributions are important since they have been empirically seen to emerge in trained deep neural nets such as the ResNet and VGG series, and proven to naturally arise via stochastic gradient descent. The introduction of heavy-tailed weights broadens the class of priors in Bayesian neural networks. In this work we extend a recent result of Favaro, Fortini, and Peluchetti (2020) to show that the vector of pre-activation values at all nodes of a given hidden layer converges in the limit, under a suitable scaling, to a vector of i.i.d. random variables with symmetric $\alpha$-stable distributions, $\alpha\in(0,2]$. © The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust.
dc.language: English
dc.publisher: Cambridge University Press
dc.title: α-stable convergence of heavy-/light-tailed infinitely wide neural networks
dc.type: Article
dc.type.rims: ART
dc.identifier.wosid: 001168005000006
dc.identifier.scopusid: 2-s2.0-85165334144
dc.identifier.rimsid: 81390
dc.contributor.affiliatedAuthor: Hongseok Yang
dc.identifier.doi: 10.1017/apr.2023.3
dc.identifier.bibliographicCitation: Advances in Applied Probability, v.55, no.4, pp.1415 - 1441
dc.relation.isPartOf: Advances in Applied Probability
dc.citation.title: Advances in Applied Probability
dc.citation.volume: 55
dc.citation.number: 4
dc.citation.startPage: 1415
dc.citation.endPage: 1441
dc.description.journalClass: 1
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.subject.keywordAuthor: heavy-tailed distribution
dc.subject.keywordAuthor: infinite-width limit
dc.subject.keywordAuthor: multi-layer perceptrons
dc.subject.keywordAuthor: stable process
dc.subject.keywordAuthor: weak convergence
Appears in Collections:
Pioneer Research Center for Mathematical and Computational Sciences(수리 및 계산과학 연구단) > 1. Journal Papers (저널논문)
Files in This Item:
There are no files associated with this item.
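The convergence result stated in the abstract lends itself to a quick numerical check. The sketch below is not code from the paper; the function name, the choice of scipy.stats.levy_stable, the widths, and the alpha values are all illustrative assumptions. It initializes an MLP with i.i.d. symmetric α-stable weights and biases, scales each layer's pre-activations by fan_in^(-1/α) (which reduces to the familiar 1/sqrt(fan_in) when α = 2), and inspects the tails of one final-layer pre-activation, which should look approximately α-stable when the hidden layers are wide.

```python
# Minimal numerical sketch (assumed setup, not the authors' code) of the
# alpha-stable infinite-width limit described in the abstract.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)


def last_layer_preactivation(x, widths, alphas, n_draws=300):
    """Draw n_draws independent realizations of one pre-activation value
    in the final hidden layer of a randomly initialized MLP."""
    draws = np.empty(n_draws)
    for s in range(n_draws):
        h = x
        pre = h
        for width, alpha in zip(widths, alphas):
            fan_in = h.shape[0]
            # i.i.d. symmetric alpha-stable weights and biases for this layer
            W = levy_stable.rvs(alpha, 0.0, size=(width, fan_in), random_state=rng)
            b = levy_stable.rvs(alpha, 0.0, size=width, random_state=rng)
            # fan_in ** (-1/alpha) generalizes the usual 1/sqrt(fan_in) scaling
            pre = fan_in ** (-1.0 / alpha) * (W @ h) + b
            h = np.tanh(pre)  # any bounded nonlinearity
        draws[s] = pre[0]
    return draws


if __name__ == "__main__":
    x = np.ones(10)
    samples = last_layer_preactivation(x, widths=[200, 200], alphas=[1.5, 1.5])
    # Heavy tails show up as a slowly decaying empirical survival function.
    for t in (1.0, 5.0, 25.0):
        print(f"P(|Z| > {t}) ~ {np.mean(np.abs(samples) > t):.3f}")
```

With α = 1.5 the printed tail probabilities decay much more slowly than they would for a Gaussian (α = 2) initialization run with the same widths, which is the qualitative behaviour the theorem predicts.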

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.