Dealing with nan in ..._filt_<filter>.trc causing condition 'convertOk' not met during kinematics #143
Issue #114: `fill_large_gaps_with = 'last_value'  # 'last_value', 'nan', or 'zeros'`
Hi, you need at least 2 cameras to triangulate the coordinates. When there is a hole for a few frames you can interpolate it, but if a point is not seen for the whole sequence, it is not going to work. Here are a few ideas, in order of hackiness:
Please tell me how it goes!
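(For illustration, a hedged sketch of the "interpolate short holes" idea in pandas; this is not Pose2Sim's own code, and the function name and gap threshold are placeholders:)

```python
import numpy as np
import pandas as pd

def fill_short_gaps(series: pd.Series, max_gap: int = 10) -> pd.Series:
    """Interpolate NaN runs no longer than max_gap frames; leave longer ones."""
    filled = series.interpolate(method='linear', limit_area='inside')
    # Measure the length of each NaN run and restore NaN where the run was too long
    is_nan = series.isna()
    run_id = (is_nan != is_nan.shift()).cumsum()
    run_len = is_nan.groupby(run_id).transform('sum')
    filled[is_nan & (run_len > max_gap)] = np.nan
    return filled

# A 3-frame hole gets filled; a marker missing for the rest of the sequence stays NaN
traj = pd.Series([1.0, np.nan, np.nan, np.nan, 5.0] + [np.nan] * 10)
print(fill_short_gaps(traj, max_gap=5))
```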
Hi, with

```xml
<IKMarkerTask name="RSmallToe">
    <!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
    <!-- <apply>true</apply> -->
    <apply>false</apply>
    <!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
    <weight>1</weight>
</IKMarkerTask>
```

in OpenSim there are still some issues with it: maybe the number of cameras is not enough, or the alignment of the two cameras is not right. The person is not really standing, nor doing what they did in the video. In OpenCap there were no problems with two cameras.
There are definitely issues in your triangulated trc files. How did you calibrate?
Hi, you did not specify which issues there are in the trc files.
Check this thread starting from the linked comment, maybe. It seems like if you download synchronized videos from OpenCap, they are exported at half the resolution, which leads to a faulty calibration. EDIT: Sorry, I had not seen your last message, let me read it.
The issues are that the trc files don't look like a human being doing anything 😅 Synchronization is likely to be another issue indeed. Check the appropriate doc section to make sure it is correct: https://github.com/perfanalytics/pose2sim/blob/main/README.md#synchronization
I think calibration.py should check the resolution and frames per second of all the videos. I got a new camera; I'll try my final setup tomorrow. Synchronization:
Calibration does check the resolution of videos; I just tried to make a guess based on the information you provided, but I went in the wrong direction. To be clearer: if you capture with OpenCap and convert its calibration files, but then export the wrong videos, you will have a discrepancy between the resolutions in the calibration file and the videos. But that's not what you did. The framerate is also detected automatically, see Config.toml. Synchronization should be deterministic if you run on the same videos with the same parameters. If the right wrist clearly raises only at one time, it should not give you such different results, unless there is a problem in the parameters of your Config.toml file. If you want to send some of your data to [email protected], I can look at it and search for the bug.
What does Pose2Sim.personAssociation() do?

```
\personAssociation.py:588: RuntimeWarning: Mean of empty slice
  mean_error_px = np.around(np.nanmean(error), decimals=1)
--> Mean reprojection error for Neck point on all frames is nan px, which roughly corresponds to nan mm.
--> In average, 0.01 cameras had to be excluded to reach the demanded 20 px error threshold after excluding points with likelihood below 0.3.
Tracked json files are stored in ...
Associating persons took 00h00m02s.
```

Then when running triangulation I get:

```
raise Exception('No persons have been triangulated. Please check your calibration and your synchronization, or the triangulation parameters in Config.toml.')
Exception: No persons have been triangulated. Please check your calibration and your synchronization, or the triangulation parameters in Config.toml.
```
personAssociation sorts out people to ensure that you triangulate the same person across cameras rather than trying to triangulate two different persons together (in single-person mode, it removes people in the background). If you only have one person in the scene, you do not need to run it. This error still means that either your calibration or your synchronization is not good. Did you try trimming the video by hand to make sure the problem was not about synchronization?
Before, I had one webcam vertical and one smartphone horizontal because of the DroidCam setting. Now: It seems that personAssociation (and perhaps triangulation) also needs some minimal video length to work properly (in addition to synchronization). Perhaps you should display a warning or even an error. I switched temporarily to lightweight for my local computer. What does 0.03 excluded cameras mean? 3% of something?
Hi again,
I am getting the same error as you when trying to process the demo videos from the OpenCap page. I downloaded the raw videos (.mov) and all the steps complete correctly, but when I try to calculate the inverse kinematics, it returns this error:
The only video for which I have been able to calculate the inverse kinematics is the one for the activity '1legjumpdroplanding'. All the others have given me the same error.
Hi, in my video cam01.mp4, with the script below I can see that if I use every 30th frame nine times, I get different rvecs values than if I do it only once, twice, or three times. If you use cv2.drawChessboardCorners(img, CHECKERBOARD, corners2, ret), usually the chessboard is drawn red to ... in colors. At some point you can see that the chessboard is ... to red, so rotated 180 degrees or something like that. Perhaps this helps in finding the problem. Btw: I could not execute "caliscope" after pip-installing it. How large must the ChArUco board be for FreeMoCap?
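(The script itself did not survive in this export; below is a hypothetical reconstruction of that kind of check, assuming a standard OpenCV chessboard loop. The board size, window parameters, and sampling step are placeholders:)

```python
import cv2

CHECKERBOARD = (6, 9)  # inner corners per column/row -- an assumption
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

cap = cv2.VideoCapture('cam01.mp4')
frame_idx = 0
while cap.isOpened():
    ok, img = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # look at every 30th frame
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, CHECKERBOARD, None)
        if found:
            corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            # If the detected corner ordering flips between frames, the drawn
            # color gradient reverses -- the "180 degrees rotated" effect
            # described above, and rvecs will disagree accordingly.
            cv2.drawChessboardCorners(img, CHECKERBOARD, corners, found)
            cv2.imshow('corners', img)
            cv2.waitKey(1)
    frame_idx += 1
cap.release()
cv2.destroyAllWindows()
```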
Hi all, sorry, my job is keeping me quite busy, but I'll try to answer by the end of the week!
Hi Jens and Mario, @medpar, it seems like there is something shady with the OpenCap data that I can't figure out. Would you mind sending me your full project folders, including the calibration files and/or videos? I suppose there must be some NaN in the TRC file, which would be caused by a bad triangulation. Are the results for "1legjumpdroplanding" good? @jtoerber, I just tried the data you sent me.
Hi, there is a misunderstanding: I do not use OpenCap, hence calibration is done by Pose2Sim. And: would you please be so kind as to remove the image in your last post? Thanks in advance.
Sure, sorry for the image! Okay, I may have misunderstood concerning OpenCap then. In this case, you just need to flip the Y and Z coordinates and make Z = -Z in your configuration file, and you should have the person upright (a sketch of the same swap applied directly to a TRC file follows this comment)! So, by order of impact, you should:
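(A hedged sketch of that Y/Z flip applied directly to a TRC file, in case the config route is unclear; it assumes the usual TRC layout of 5 header lines followed by tab-separated Frame#, Time, and X/Y/Z triples, and is not Pose2Sim's own code:)

```python
import pandas as pd

def flip_yz(trc_in: str, trc_out: str) -> None:
    """Swap Y and Z and negate the new Z, leaving the TRC header untouched."""
    with open(trc_in) as f:
        header = [next(f) for _ in range(5)]
    data = pd.read_csv(trc_in, sep='\t', skiprows=5, header=None)
    coords = data.iloc[:, 2:].to_numpy()            # drop Frame# and Time
    xyz = coords.reshape(len(data), -1, 3)          # (frames, markers, xyz)
    y, z = xyz[:, :, 1].copy(), xyz[:, :, 2].copy()
    xyz[:, :, 1] = z                                # new Y = old Z
    xyz[:, :, 2] = -y                               # new Z = -old Y
    data.iloc[:, 2:] = xyz.reshape(len(data), -1)
    with open(trc_out, 'w') as f:
        f.writelines(header)
        data.to_csv(f, sep='\t', index=False, header=False)
```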
This issue has been marked as stale. Do not hesitate to reopen it if needed!
Hi, with some transformation of coordinates I can see my capture in OpenSim, but: when using OpenCap we had no problem with 2 cameras. So does the Pose2Sim triangulation need at least 3 cameras to work properly?
@davidpagnon There is no "Reopen issue" button, so it seems I do not have the rights to do so.
Hi,
Hi, regarding different cameras: how am I able to calibrate multiple (>3) cameras, let's say all pointing/looking toward the centre of a circle?
Hi, during pose estimation the following error occurs:

```
[mpeg4 @ 0000018e88722b80] timebase 1000/119193 not supported by MPEG 4 standard, the maximum admitted value for the timebase denominator is 65535
```

In [project] I can set frame_rate to an int instead of 'auto'.
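(As a minimal Config.toml sketch of that workaround; the value 120 is only an assumption for a high-speed capture:)

```toml
[project]
frame_rate = 120  # an int instead of 'auto' sidesteps the odd detected timebase
```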
Hi, sorry for the late answer, I'm pretty swamped with work this week... I'll do my best to spend some time on issues by the end of the week!
Hi, I tried raw RTMPose (COCO_133) with lightweight and performance. In my CSV file I now have the (original) keypoint column names via the helpers below:

```python
# Skeleton dicts as used in rtmlib's visualization code; the import path
# follows https://github.com/Tau-J/rtmlib/blob/main/rtmlib/visualization/draw.py
# (adjust if the module layout differs)
from rtmlib.visualization.skeleton import (coco17, coco133, hand21,
                                           halpe26, openpose18, openpose134)

def getInfos(keypoints, openpose_skeleton):
    # Pick the skeleton definition matching the keypoint count
    num_keypoints = keypoints.shape[1]
    if openpose_skeleton:
        if num_keypoints == 18:
            skeleton = openpose18
        elif num_keypoints == 134:
            skeleton = openpose134
        elif num_keypoints == 26:
            skeleton = halpe26
        else:
            raise NotImplementedError
    else:
        if num_keypoints == 17:
            skeleton = coco17
        elif num_keypoints == 133:
            skeleton = coco133
        elif num_keypoints == 21:
            skeleton = hand21
        elif num_keypoints == 26:
            skeleton = halpe26
        else:
            raise NotImplementedError
    return skeleton['keypoint_info'], skeleton['skeleton_info']

def getKeypointNamesAndIds(keypoints, Coord2DPlusScore=False, openpose_skeleton=True):
    # Build flat column names, optionally one column each for x, y, and score
    keypoint_info, skeleton_info = getInfos(keypoints, openpose_skeleton)
    keypointNames = []
    keypointIds = []
    for i, kpt_info in keypoint_info.items():
        if Coord2DPlusScore:
            for suffix in ('_x', '_y', '_score'):
                keypointNames.append(kpt_info['name'] + suffix)
                keypointIds.append(str(kpt_info['id']) + suffix)
        else:
            keypointNames.append(kpt_info['name'])
            keypointIds.append(kpt_info['id'])
    return keypointNames, keypointIds
```

min(score) for lightweight was about 0.24 (which, btw, is not bad considering only one camera was used, from behind left). If no person is in the frame, this is detected (resized via namedWindow): If a person is detected, the sport devices (see above) are not detected. So:
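(For reference, a quick hypothetical usage check of the helpers above; the expected names assume rtmlib's COCO-133 keypoint order:)

```python
import numpy as np

kpts = np.zeros((1, 133, 2))  # (persons, keypoints, xy), as returned by RTMPose
names, ids = getKeypointNamesAndIds(kpts, Coord2DPlusScore=True,
                                    openpose_skeleton=False)
print(names[:3])  # expected: ['nose_x', 'nose_y', 'nose_score']
```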
Sounds a little strange.
This is why I do not recommend using a checkerboard for extrinsic parameters, but rather coordinates from objects in the scene. Then, as long as they are not symmetric, you can have cameras looking in any direction (see the appropriate section in the documentation: https://github.com/perfanalytics/pose2sim?tab=readme-ov-file#calculate-from-scratch).
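(For illustration, a hedged sketch of the principle, not Pose2Sim's internal calibration code; the 3D measurements, pixel coordinates, and intrinsics below are made up:)

```python
import cv2
import numpy as np

# Known 3D coordinates of non-symmetric scene objects (metres) ...
object_points = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 1.5, 0.0],
                          [2.0, 1.5, 0.8], [1.0, 0.5, 0.3], [0.3, 1.2, 0.6]],
                         dtype=np.float32)
# ... and where they appear in this camera's image (pixels)
image_points = np.array([[412, 633], [1290, 610], [430, 301],
                         [1275, 255], [840, 470], [520, 350]],
                        dtype=np.float32)
K = np.array([[1400, 0, 960], [0, 1400, 540], [0, 0, 1]], dtype=np.float32)
dist = np.zeros(4)  # assume negligible or already-corrected distortion

# The camera pose (extrinsics) follows from PnP, whichever way the camera faces
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print(ok, rvec.ravel(), tvec.ravel())
```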
It should not allow it indeed; did this happen? Do you have a model and a .mot file for me to check? (If the scaling is particularly wrong, you could get very weird things, though.)
Are you telling me that pose estimation fails at 120 fps? At what point does it fail? Can you (generally) give me the full error message? Did you manage to solve it? Concerning the confidence score that is sometimes higher than 1: it is surprising, but they do let it be on purpose, see open-mmlab/mmpose#884
Hi,
I have 2 cameras. One of the cameras could see my right small toe all of the time, the other could not.
So after filtering I have NaN in the .trc file for the corresponding landmark/keypoint.
When I do inverse kinematics to get the angles, I get an error message (the condition 'convertOk' not met error from the title).
Lowering the thresholds did not work.
I am not able to work with more than two cameras, as they need high FPS (>=100) in my case.
In this specific case I am able to adjust the positions of the cameras.
But in another case I want to track the arm and fingers, and it will happen that there are gaps even larger than a (large) interp_if_gap_smaller_than parameter, especially for landmarks/keypoints that are not interesting in that specific case.
How am I able to get inverse kinematics running?
I could replace NaN with 0.0 (see the sketch after this comment)?
I could (at least try to) remove the non-interesting landmark(s)?
Any recommendations?
Interestingly, it is the filtering that has a problem with this, not the triangulation, which may decide to take the landmark/keypoint from the camera that is able to see it.
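(For completeness, a hedged sketch of the "replace NaN with 0.0" option mentioned above, with the same TRC layout assumptions as the earlier flip sketch. Note that zeroed markers sit at the origin and may drag the OpenSim model toward it, so disabling the marker task, as in the `<IKMarkerTask>` snippet earlier in the thread, is usually safer:)

```python
import pandas as pd

def zero_fill_trc(trc_in: str, trc_out: str) -> None:
    """Replace NaN coordinates with 0.0, keeping the 5-line TRC header."""
    with open(trc_in) as f:
        header = [next(f) for _ in range(5)]
    data = pd.read_csv(trc_in, sep='\t', skiprows=5, header=None).fillna(0.0)
    with open(trc_out, 'w') as f:
        f.writelines(header)
        data.to_csv(f, sep='\t', index=False, header=False)
```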