Some reconstructed results #5

Open
ethyd4 opened this issue Sep 4, 2023 · 5 comments
Comments

ethyd4 commented Sep 4, 2023

Hello ma'am,
While scanning small objects I was getting some incorrect reconstruction results. Some of the reconstructed results are shown in the attached screenshots:
[Screenshots: 2023-08-21 15-42-35, 2023-08-28 12-12-14, 2023-08-28 10-21-51, A_hat1_color_frame0 (2), A_hat1_color_frame0 (1), A_hat1_color_frame0]


ethyd4 commented Sep 4, 2023

If I scan any medium-sized object, such as a human, a chair, or a tin, I get decent results. But when I scan small objects, the reconstruction is not correct.
Can you give some suggestions to improve the results?


Ritchizh commented Sep 4, 2023

Hi!

  1. The RealSense depth camera has 1–6 cm depth resolution (depending on distance), so the camera's accuracy is probably not enough for finer small models.
  2. Try filtering out clutter more accurately: remove everything that is not the object from the point cloud.
  3. Try a stricter convergence criterion by tightening the RANSACConvergenceCriteria parameters.
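The clutter-filtering step in point 2 can be sketched without any camera SDK; below is a minimal NumPy-only illustration of statistical outlier removal. The neighbor count and `std_ratio` threshold here are assumptions for the sketch; in Open3D the equivalent built-in is `PointCloud.remove_statistical_outlier`, which should be preferred on real clouds.

```python
import numpy as np

def statistical_outlier_mask(points, nb_neighbors=8, std_ratio=2.0):
    """Flag points whose mean distance to their nearest neighbors is
    unusually large (above mean + std_ratio * std over the whole cloud).
    Brute-force O(n^2): fine as a sketch, use a KD-tree for real clouds."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # ignore self-distance
    knn = np.sort(d, axis=1)[:, :nb_neighbors]       # k nearest distances
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return mean_d <= thresh                          # True = keep the point

# A tight cluster plus one far-away "floater" point:
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.01, size=(50, 3)),
                   [[5.0, 5.0, 5.0]]])
keep = statistical_outlier_mask(cloud)
print(keep.sum(), "of", len(cloud), "points kept")   # the floater is dropped
```

Raising `std_ratio` keeps more borderline points, which is the knob to turn when good data is being removed along with the noise.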


ethyd4 commented Sep 5, 2023

Hello ma'am,
In the point cloud data for each frame we get a certain amount of incorrect data. If I use statistical or radius outlier removal methods to remove it, I lose the good data as well.

For the Intel RealSense D415 the capture range is 0.3 m to 10 m, and it should capture all the details within that distance, so I don't think the camera itself is the issue.

Can you suggest a better way to remove the incorrect data from the point cloud?


Ritchizh commented Sep 5, 2023

Statistical and radius outlier removal methods are effective. You can also use a bounding box to cut your object out of the point cloud, as I have shown in the example: `bounds = [[-np.inf, np.inf], [-np.inf, 0.15], [0, 0.56]]  # set the bounds`
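The `bounds` line above can be applied as a simple NumPy mask; here is a minimal self-contained sketch. The x/y/z axis ordering and the helper name `crop_to_bounds` are assumptions based on the snippet in the comment.

```python
import numpy as np

# Per-axis [min, max] limits, as in the comment: x unbounded,
# y below 0.15 m, z between 0 and 0.56 m.
bounds = [[-np.inf, np.inf], [-np.inf, 0.15], [0, 0.56]]

def crop_to_bounds(points, bounds):
    """Keep only the points that fall inside the per-axis bounds."""
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

pts = np.array([[0.0, 0.10, 0.30],   # inside the bounds
                [0.0, 0.20, 0.30],   # y too large
                [0.0, 0.10, 0.60]])  # z too large
print(crop_to_bounds(pts, bounds))   # only the first point survives
```

Unlike statistical or radius filtering, a crop like this cannot remove good points on the object itself, as long as the bounds are chosen around the object.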

Your distance range should be OK for the D415; however, what I meant is that the depth error is about ±2% of distance, so you won't see all the fine texture details in the shape.
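For a sense of scale, assuming a flat ±2%-of-distance error model (a simplification; the real datasheet error curve varies with distance and conditions):

```python
def depth_error_cm(dist_m, rel_err=0.02):
    """Approximate depth error in cm, assuming error = rel_err * distance."""
    return rel_err * dist_m * 100

for dist_m in (0.3, 0.5, 1.0, 2.0):
    print(f"{dist_m:.1f} m -> +/- {depth_error_cm(dist_m):.1f} cm")
```

At the 0.3–0.5 m range where a small object would be scanned, the error is already around 0.6–1 cm, i.e. comparable in size to the fine detail on a small model.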

Unfortunately, I don't know why your reconstructed shapes look so distorted. Are only the final meshes distorted, or the RANSAC-merged point clouds too?


ethyd4 commented Sep 12, 2023

We are getting good results for objects bigger than about 1 ft (30 cm); this might be because of the sensor resolution. We are thinking of using a LiDAR instead of a depth camera, and plan on going with the Intel RealSense L515, which may give better results for small objects. A LiDAR directly gives point cloud data as well as RGB. Will the algorithm you designed work with the Intel RealSense L515 LiDAR camera, and what changes would we need to make for that?

A request: could you try scanning small objects? Otherwise, we can provide a dataset for the same. Please let us know where we are going wrong.
