Thank you very much for your work. I would now like to use the model you provided (ssdnerf_chairs_recons1v_80k_emaonly.pth) to reconstruct my own data. I reconstructed a mesh from the chair image shown below, but it deviates noticeably from the real object.
Therefore, I would like to test with two or four input images. I have a few questions:
1. Do I need to train recons2v/recons4v weights on the training set myself, or can I simply reuse the recons1v weights?
2. For my data, the camera poses of the multiple views contain some error and are not very accurate. Since precise camera poses are crucial for NeRF, I am not sure whether this will affect the performance of SSDNeRF.
3. Regarding the camera intrinsics, my camera has non-negligible distortion parameters. In addition, to obtain the (128, 128) image size I cropped and resized my original images, which introduced some error into the computed intrinsics (I update them roughly as in the sketch after this list). I would like to know whether such intrinsics will affect the model's performance.
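For reference, this is roughly how I update the intrinsics after cropping and resizing (a simplified sketch of my preprocessing, not code from the SSDNeRF repository; the numbers are placeholders):

```python
import numpy as np

def adjust_intrinsics(K, crop_x0, crop_y0, crop_w, crop_h, out_w=128, out_h=128):
    """Update a 3x3 pinhole intrinsic matrix after a crop followed by a resize.

    Cropping shifts the principal point by the crop offset; resizing scales
    the focal lengths and principal point by the output/crop size ratio.
    """
    K = K.astype(np.float64).copy()
    K[0, 2] -= crop_x0          # crop: shift principal point
    K[1, 2] -= crop_y0
    sx, sy = out_w / crop_w, out_h / crop_h
    K[0, 0] *= sx               # resize: scale fx and cx horizontally
    K[0, 2] *= sx
    K[1, 1] *= sy               # resize: scale fy and cy vertically
    K[1, 2] *= sy
    return K

# Placeholder example: a 640x480 camera, center-cropped to 480x480, then 128x128.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
K_new = adjust_intrinsics(K, crop_x0=80, crop_y0=0, crop_w=480, crop_h=480)
```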
When I ran the test using the downloaded weights with my own data added, I obtained the following output:
However, since the original "chairs" test set still yields reasonable results, I believe this output may be harmless. I am not sure whether my understanding is correct.
If my data is simply unsuitable, please let me know; after all, it is much noisier than the standard datasets. Looking forward to your response.
Hi! It seems there could be an issue with your reconstruction setup, such as the camera pose convention (we use the OpenCV camera convention, while some data use the Blender/OpenGL convention). Usually, even when the actual 3D geometry fails, the result should still look close to the input, at least from the given viewpoint.
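In case the pose convention is the problem: the conversion between the two is just a flip of the camera's y and z axes. A minimal sketch (my own illustration, not part of the released code):

```python
import numpy as np

def opengl_to_opencv_c2w(c2w_gl):
    """Convert a 4x4 camera-to-world pose from the OpenGL/Blender convention
    (x right, y up, z backward) to the OpenCV convention (x right, y down,
    z forward) by negating the camera's y and z axes."""
    c2w_cv = np.asarray(c2w_gl, dtype=np.float64).copy()
    c2w_cv[:3, 1] *= -1.0  # flip camera y axis
    c2w_cv[:3, 2] *= -1.0  # flip camera z axis
    return c2w_cv
```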
You can just use recons1v; these models are the same except for the testing configuration.
We have not rigorously tested robustness against pose error, but in my experience the model should be reasonably robust. I have a GUI demo under development (it may be released around the end of this year); for in-the-wild images I simply align the camera pose manually in the GUI, which is not accurate, but the results are still good enough most of the time.
If your camera has distortion, it would be better to rectify the image and obtain the correct intrinsics before feeding it to SSDNeRF. That said, you can sometimes just set the intrinsics by manually tweaking the focal length, and the model should be reasonably robust to this error. With OpenCV, the rectification would look roughly like the sketch below.
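A rough sketch of the rectification, assuming you have calibrated distortion coefficients (the intrinsics, coefficients, and file name below are placeholders):

```python
import cv2
import numpy as np

# Placeholder calibration: replace with your own intrinsics and
# distortion coefficients (k1, k2, p1, p2, k3).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])

img = cv2.imread("chair.png")  # hypothetical input image
h, w = img.shape[:2]

# Compute rectified intrinsics; alpha=0 crops away invalid border pixels.
new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0)
undistorted = cv2.undistort(img, K, dist, None, new_K)

# Feed `undistorted` and `new_K` (after consistent cropping/resizing)
# to SSDNeRF instead of the raw image and original intrinsics.
```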
This is not an error; I just haven't suppressed these warnings yet.