Please use this identifier to cite or link to this item:
https://elib.vku.udn.vn/handle/123456789/2729
Title: Few-Shots Novel Space-Time View Synthesis from Consecutive Photos
Authors: Mai, Van Quan; Nguyen, Duc Dung
Keywords: NeRF; View synthesis; Few-shot view reconstruction
Issue Date: 2023
Publisher: Springer Nature
Abstract: Despite the remarkable results of Neural Scene Flow Fields [10] in novel space-time view synthesis of dynamic scenes, the model has limited ability when only a few input views are provided. To enable few-shot novel space-time view synthesis of dynamic scenes, we propose a new approach that extends the model architecture to use shared priors learned across scenes to predict appearance and geometry in static background regions. Throughout the optimization, our network is trained to rely on image features extracted from the few input views, or on the learned knowledge, to reconstruct unseen regions based on the camera view direction. We conduct multiple experiments on the NVIDIA Dynamic Scenes Dataset [23] that demonstrate our approach achieves better rendering quality than the prior work when only a few input views are available.
Description: Lecture Notes in Networks and Systems (LNNS, volume 734); CITA: Conference on Information Technology and its Applications; pp. 240-249.
URI: https://link.springer.com/chapter/10.1007/978-3-031-36886-8_20
     http://elib.vku.udn.vn/handle/123456789/2729
ISBN: 978-3-031-36886-8
Collections: CITA 2023 (International)
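
Note: the abstract above describes conditioning the radiance field on image features extracted from a few input views, combined with priors learned across scenes. The record contains no code, so the snippet below is only a minimal, illustrative sketch of that general idea in PyTorch, not the authors' architecture; the class name FeatureConditionedNeRF, all layer sizes, and the feature dimension are assumptions chosen for brevity.

import torch
import torch.nn as nn

class FeatureConditionedNeRF(nn.Module):
    """Illustrative sketch: predicts density and view-dependent color for a
    3D point, given an image feature vector gathered from a few input views."""
    def __init__(self, pos_dim=3, dir_dim=3, feat_dim=128, hidden=256):
        super().__init__()
        # Trunk MLP conditioned on the sampled point and its image feature.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)            # volume density
        self.color_head = nn.Sequential(                  # view-dependent RGB
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir, img_feat):
        h = self.trunk(torch.cat([xyz, img_feat], dim=-1))
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.color_head(torch.cat([h, view_dir], dim=-1))
        return rgb, sigma

# Usage example: one batch of sampled points with per-point image features.
model = FeatureConditionedNeRF()
xyz = torch.randn(1024, 3)
view_dir = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
img_feat = torch.randn(1024, 128)
rgb, sigma = model(xyz, view_dir, img_feat)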