Aliasing when computing mismatch under a scaling transformation #86
One thought: since we compute the denominator and numerator separately (glad we chose to keep track of those!), what happens if you modify the objective function to return …
How is what you're describing different from using the …
Hmm, you're right, it should be the same because of that. But I guess the issue is really that it's "blowing up" a small portion of the moving image? So in fact I guess you don't get a lot of NaNs. Perhaps we want to add a criterion that we "used" some large fraction of the pixels. One way to do that (not sure it's the best...) would be to create an array that tracks which voxels were used:

```julia
used = falses(length(fixed))  # one flag per fixed-image voxel
for i in warpedinds
    if isfinite(i)
        used[Int(i)] = true
    end
end
```

That marks any voxel …
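The coverage criterion proposed above could then gate the mismatch value itself. A minimal sketch in Python/NumPy (the function name `mismatch_with_coverage` and the threshold `min_frac` are my own, not from the package):

```python
import numpy as np

def mismatch_with_coverage(num, denom, warpedinds, nfixed, min_frac=0.75):
    """Return the normalized mismatch num/denom, but only if a large
    enough fraction of the fixed-image voxels were actually compared.

    warpedinds : fixed-image indices that received a warped moving value
                 (NaN where the warp fell outside the moving image)
    nfixed     : total number of fixed-image voxels
    min_frac   : hypothetical coverage threshold (an assumption here)
    """
    used = np.zeros(nfixed, dtype=bool)
    finite = np.isfinite(warpedinds)
    used[warpedinds[finite].astype(int)] = True
    if used.sum() / nfixed < min_frac:
        return np.inf  # reject: too many fixed voxels were skipped
    return num / denom
```

Returning `Inf` (rather than NaN) keeps the rejected point usable by a minimizer; whether that interacts well with QuadDIRECT's model-building is an open question.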
Some 3D affine registration tests don't pass reliably, so I have commented them out. I spent some time looking into this, and my suspicion is that when optimizing full affine transforms, the mismatch objective function is especially prone to local minima at scaled versions of the images. Typically QuadDIRECT jumps directly to the most extreme scaling in the search space, and sometimes it gets stuck there and never finds the better, more moderate scaling. Typically it expands (rather than contracts) the moving image.
I have a guess at why it tends to do this: expanding the moving image means there will be fewer pixels of overlap, so the denominator of the mismatch goes down. Ideally the numerator would go down proportionally (that's the whole purpose of the normalization: to get similar answers with many or few pixels of overlap). However, since the moving image is expanded, the space between overlapping pixels is larger than one pixel in the fixed image. This gives the algorithm an unwanted degree of freedom: if a particular pixel in fixed can't be matched well, the moving image can be positioned to skip over that pixel, so it never gets compared to a moving pixel and never enters the mismatch calculation. With larger scale factors the algorithm gets more and more freedom to avoid unwanted pixel comparisons.
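The skipping effect can be seen in a toy 1-D model: if the warp is evaluated on the moving image's integer grid and mapped into fixed coordinates, then at scale 2 only every other fixed pixel is ever compared, and a unit shift lets the optimizer choose *which* pixels to skip. A sketch (the function `compared_fixed_indices` is purely illustrative, not from the package):

```python
import numpy as np

def compared_fixed_indices(nmoving, scale, shift, nfixed):
    """Fixed-image indices that get compared when the warp is
    evaluated at the moving image's integer indices (toy 1-D model)."""
    pos = np.round(shift + scale * np.arange(nmoving)).astype(int)
    return pos[(pos >= 0) & (pos < nfixed)]

nfixed = 8
idx1 = compared_fixed_indices(8, 1.0, 0, nfixed)  # every fixed pixel: 0..7
idx2 = compared_fixed_indices(8, 2.0, 0, nfixed)  # even pixels only: 0, 2, 4, 6
idx3 = compared_fixed_indices(8, 2.0, 1, nfixed)  # odd pixels only: 1, 3, 5, 7
```

At scale 1 the comparison set is fixed; at scale 2 the optimizer effectively gets to pick half the pixels, which is exactly the unwanted degree of freedom described above.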
This can be seen as a kind of aliasing, so one way I could imagine addressing this particular issue is to low-pass filter the fixed image before comparing it to an expanded moving image. I'm not convinced this would solve all issues, but it may help. Another idea is to change the default indices where `warp` gets evaluated so that the moving image gets oversampled and we avoid skipping pixels in fixed. I kind of prefer the second idea, but it seems to require that `warp` allow evaluation at non-integer indices, which doesn't seem possible right now.
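For the first idea, the filter width would have to track the scale factor, so that frequencies the sparse sampling would alias are attenuated before comparison. A minimal 1-D sketch using a box blur (a Gaussian with sigma proportional to the scale would be a smoother choice; `lowpass_fixed` is a hypothetical name):

```python
import numpy as np

def lowpass_fixed(fixed, scale):
    """Box-blur the fixed image with a window roughly matching the
    scale factor, attenuating detail finer than the sampling spacing.
    Toy 1-D sketch; at scale 1 the image is returned unchanged."""
    w = max(int(round(scale)), 1)
    if w == 1:
        return fixed.copy()
    kernel = np.ones(w) / w
    return np.convolve(fixed, kernel, mode="same")
```

A pixel-frequency pattern like `[0, 1, 0, 1]` survives scale 1 untouched but is damped at scale 2, which is the behavior the aliasing argument calls for.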