Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/2729
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Mai, Van Quan | - |
dc.contributor.author | Nguyen, Duc Dung | - |
dc.date.accessioned | 2023-09-26T01:45:36Z | - |
dc.date.available | 2023-09-26T01:45:36Z | - |
dc.date.issued | 2023-07 | - |
dc.identifier.isbn | 978-3-031-36886-8 | - |
dc.identifier.uri | https://link.springer.com/chapter/10.1007/978-3-031-36886-8_20 | - |
dc.identifier.uri | http://elib.vku.udn.vn/handle/123456789/2729 | - |
dc.description | Lecture Notes in Networks and Systems (LNNS, volume 734); CITA: Conference on Information Technology and its Applications; pp. 240-249. | vi_VN |
dc.description.abstract | Despite the remarkable results of Neural Scene Flow Fields [10] in novel space-time view synthesis of dynamic scenes, the model's ability is limited when only a few input views are provided. To enable few-shot novel space-time view synthesis of dynamic scenes, we propose a new approach that extends the model architecture to use shared priors learned across scenes to predict appearance and geometry in static background regions. Throughout the optimization, our network is trained to rely either on the image features extracted from the few input views or on the learned knowledge to reconstruct unseen regions, depending on the camera view direction. We conduct multiple experiments on the NVIDIA Dynamic Scenes Dataset [23], which demonstrate that our approach achieves better rendering quality than the prior work when only a few input views are available. | vi_VN |
dc.language.iso | en | vi_VN |
dc.publisher | Springer Nature | vi_VN |
dc.subject | NeRF | vi_VN |
dc.subject | View synthesis | vi_VN |
dc.subject | Few-shot view reconstruction | vi_VN |
dc.title | Few-Shots Novel Space-Time View Synthesis from Consecutive Photos | vi_VN |
dc.type | Working Paper | vi_VN |
Appears in Collections: | CITA 2023 (International) |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
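
The abstract above describes conditioning a static-background branch either on image features extracted from the few input views or on a prior shared across scenes, weighted by the camera view direction. As a rough illustration only, here is a minimal PyTorch sketch of that blending idea; every name and dimension (`BackgroundBranch`, `blend`, `prior_proj`, `feat_dim`, and so on) is an assumption made for illustration and does not come from the paper or its code.

```python
# Hypothetical sketch: module and variable names are illustrative assumptions,
# not the authors' implementation. It shows one plausible way to mix per-view
# image features with a cross-scene prior, weighted by the view direction.
import torch
import torch.nn as nn


class BackgroundBranch(nn.Module):
    """Toy static-background radiance field conditioned on image features
    and a shared prior embedding (assumed design, not taken from the paper)."""

    def __init__(self, feat_dim=64, prior_dim=64, hidden=128):
        super().__init__()
        # Prior learned across scenes (here a single learnable vector for simplicity).
        self.prior = nn.Parameter(torch.zeros(prior_dim))
        # Weight between per-view features and the prior, driven by view direction.
        self.blend = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid()
        )
        self.prior_proj = nn.Linear(prior_dim, feat_dim)
        # NeRF-style MLP mapping a point plus conditioning features to RGB + density.
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, xyz, view_dir, img_feat):
        # xyz: (N, 3) sample points; view_dir: (N, 3) unit directions;
        # img_feat: (N, feat_dim) features projected from the few input views.
        w = self.blend(view_dir)                           # (N, 1) in [0, 1]
        prior_feat = self.prior_proj(self.prior).expand_as(img_feat)
        feat = w * img_feat + (1.0 - w) * prior_feat       # view-dependent blend
        return self.mlp(torch.cat([xyz, feat], dim=-1))    # (N, 4): rgb + sigma


if __name__ == "__main__":
    branch = BackgroundBranch()
    out = branch(torch.rand(1024, 3), torch.rand(1024, 3), torch.rand(1024, 64))
    print(out.shape)  # torch.Size([1024, 4])
```

In practice a pixel-aligned feature extractor (for example, a small CNN run over the input views) would supply `img_feat`; here random tensors stand in for it, and the dynamic/scene-flow part of the model is omitted entirely.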