Related Scientist

cnir
Center for Neuroscience Imaging Research (뇌과학이미징연구단)

ITEM VIEW & DOWNLOAD

Improving Neural Radiance Fields Using Near-Surface Sampling with Point Cloud Generation

DC Field: Value
dc.contributor.author: Yoo, Hye Bin
dc.contributor.author: Han, Hyun Min
dc.contributor.author: Hwang, Sung Soo
dc.contributor.author: Il Yong Chun
dc.date.accessioned: 2024-08-07T09:30:02Z
dc.date.available: 2024-08-07T09:30:02Z
dc.date.created: 2024-07-29
dc.date.issued: 2024-07
dc.identifier.issn: 1370-4621
dc.identifier.uri: https://pr.ibs.re.kr/handle/8788114/15473
dc.description.abstract: Neural radiance field (NeRF) is an emerging view synthesis method that samples points in a three-dimensional (3D) space and estimates their existence and color probabilities. The disadvantage of NeRF is that it requires a long training time, since it samples many 3D points. In addition, if one samples points from occluded regions or in space where an object is unlikely to exist, the rendering quality of NeRF can be degraded. These issues can be solved by estimating the geometry of the 3D scene. This paper proposes a near-surface sampling framework to improve the rendering quality of NeRF. To this end, the proposed method estimates the surface of a 3D object using depth images of the training set and performs sampling only near the estimated surface. To obtain depth information on a novel view, the paper proposes a 3D point cloud generation method and a simple refining method for the projected depth from a point cloud. Experimental results show that the proposed near-surface sampling NeRF framework can significantly improve the rendering quality compared to the original NeRF and three different state-of-the-art NeRF methods. In addition, one can significantly accelerate the training time of a NeRF model with the proposed near-surface sampling framework. © The Author(s) 2024. (An illustrative sketch of the near-surface sampling idea follows this metadata listing.)
dc.language: English
dc.publisher: Springer
dc.title: Improving Neural Radiance Fields Using Near-Surface Sampling with Point Cloud Generation
dc.type: Article
dc.type.rims: ART
dc.identifier.wosid: 001274048100001
dc.identifier.scopusid: 2-s2.0-85199134645
dc.identifier.rimsid: 83683
dc.contributor.affiliatedAuthor: Il Yong Chun
dc.identifier.doi: 10.1007/s11063-024-11654-5
dc.identifier.bibliographicCitation: Neural Processing Letters, v.56, no.4
dc.relation.isPartOf: Neural Processing Letters
dc.citation.title: Neural Processing Letters
dc.citation.volume: 56
dc.citation.number: 4
dc.description.journalClass: 1
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.subject.keywordAuthor: Depth image
dc.subject.keywordAuthor: Near-surface sampling
dc.subject.keywordAuthor: Neural radiance field (NeRF)
dc.subject.keywordAuthor: Neural rendering
dc.subject.keywordAuthor: Point cloud
dc.subject.keywordAuthor: Three-dimensional geometry
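
Illustrative sketch: the abstract describes restricting NeRF's per-ray sampling to a narrow band around a surface depth estimated from training-set depth images and a generated point cloud. The Python sketch below illustrates only that sampling idea; it is not the authors' implementation, and the function and parameter names (near_surface_samples, surface_depth, band) are hypothetical.

import numpy as np

def near_surface_samples(ray_o, ray_d, surface_depth, n_samples=32,
                         band=0.05, near=2.0, far=6.0):
    # Place sample points in a narrow band around the estimated surface depth
    # for one ray; fall back to full-range sampling when no estimate exists.
    if surface_depth is None or np.isnan(surface_depth):
        t = np.linspace(near, far, n_samples)      # vanilla NeRF-style coverage
    else:
        lo = max(near, surface_depth - band)       # clamp the band to [near, far]
        hi = min(far, surface_depth + band)
        t = np.linspace(lo, hi, n_samples)
    # Stratified jitter inside each bin avoids querying fixed, repeated depths.
    bin_width = (t[-1] - t[0]) / max(n_samples - 1, 1)
    t = t + (np.random.rand(n_samples) - 0.5) * bin_width
    pts = ray_o[None, :] + t[:, None] * ray_d[None, :]  # (n_samples, 3) 3D query points
    return pts, t

In the paper's setting, the per-ray depth estimate for a novel view would come from projecting the generated point cloud and refining the projected depth; here it is simply assumed to be given.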
Appears in Collections:
Center for Neuroscience Imaging Research (뇌과학 이미징 연구단) > 1. Journal Papers (저널논문)
Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
