Aliasing when computing mismatch under a scaling transformation #86

Open
Cody-G opened this issue Jul 11, 2018 · 3 comments

@Cody-G
Contributor

Cody-G commented Jul 11, 2018

Some 3D affine registration tests don't pass reliably, so I have commented them out. I spent some time looking into this, and my suspicion is that when optimizing full affine transforms, the mismatch objective function is especially prone to local minima for scaled versions of the images. Typically QuadDIRECT jumps directly to the most extreme scaling in the search space, and sometimes it gets stuck there and never finds the better, more moderate scaling. Usually it expands (rather than contracts) the moving image.

I have a guess at why it tends to do this: expanding the moving image means there are fewer pixels of overlap, so the denominator of the mismatch goes down. Ideally the numerator would also go down proportionally (that's the whole purpose of the normalization: to get similar answers with many or few pixels of overlap). However, since the moving image is expanded, the spacing between overlapping pixels is larger than one pixel in the fixed image. This gives the algorithm an unwanted degree of freedom: if a particular pixel in fixed can't be matched well, the moving image can be positioned to skip over that pixel, so it never gets compared to a moving pixel and never enters the mismatch calculation. With larger scale factors the algorithm gets more and more freedom to avoid unwanted pixel comparisons.
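
To make the effect concrete, here's a toy 1-D sketch (purely illustrative, not package code): if the moving image is expanded by a factor s, the compared sample points land roughly s fixed-image pixels apart, so only about 1/s of fixed's pixels ever enter the sum.

fixed = rand(100)      # stand-in for the fixed image
s = 3.0                # hypothetical expansion factor applied to moving
# fixed-image indices that actually get compared when samples land every s pixels
sampled = filter(i -> i <= length(fixed), [round(Int, 1 + s*(k-1)) for k in 1:length(fixed)])
frac_used = length(sampled) / length(fixed)   # ≈ 1/s of fixed is ever compared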

This can be seen as a kind of aliasing, so one way I could imagine addressing this particular issue is to lowpass filter the fixed image before comparing it to an expanded moving image. I'm not convinced this would solve all issues, but it may help. Another idea is to change the default indices where warp gets evaluated, so that the moving image gets oversampled and we avoid skipping pixels in fixed. I kind of prefer the second idea, but it seems to require that warp allow evaluation at non-integer indices, which doesn't seem possible right now.
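
For the first idea, a minimal sketch of what I have in mind, assuming ImageFiltering is available and `s` is the isotropic expansion factor currently being evaluated (the σ heuristic is just a guess, not tuned):

using ImageFiltering

function lowpass_fixed(fixed::AbstractArray{T,3}, s::Real) where T
    # Smooth fixed so it contains no detail finer than what the s-times-coarser
    # sampling grid of the expanded moving image can represent.
    σ = max(s - 1, 0) / 2
    σ == 0 && return fixed
    return imfilter(fixed, KernelFactors.gaussian((σ, σ, σ)))
end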

@timholy
Member

timholy commented Jul 12, 2018

One thought: since we compute the denominator & numerator separately (glad we chose to keep track of those!), what happens if you modify the objective function to return Inf or NaN whenever the denominator drops below some threshold? IIRC, I put some effort into trying to get QuadDIRECT to do something reasonable in such cases. It's not entirely straightforward, because naively it messes up the quadratic interpolation/extrapolation, so I think it tends to cause QuadDIRECT to take quite a few more iterations. But perhaps it stands a chance of preventing this problem.
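
Something like this, as a rough sketch (names are hypothetical; the real objective computes `num` and `denom` however it already does):

function guarded_mismatch(num, denom, min_denom)
    # Too little overlap: make this transformation look as bad as possible
    # so QuadDIRECT doesn't find extreme scalings attractive.
    return denom < min_denom ? Inf : num / denom
end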

@Cody-G
Contributor Author

Cody-G commented Jul 12, 2018

How is what you're describing different from using the thresh parameter to set the minimum acceptable overlap (see indmin_mismatch)? I did try increasing thresh, but it doesn't seem to help very much. Maybe there is some perfect value that would help, but I think it would be pretty difficult to tune thresh in a way that circumvents this sampling issue.

@timholy
Member

timholy commented Jul 12, 2018

Hmm, you're right, it should be the same because of that typemax call. (Wow, I'd forgotten that was a generated function. No need for that now...)

But I guess the issue is really that it's "blowing up" a small portion of the moving image? So in fact I guess you don't get a lot of NaNs. Perhaps we want to add a criterion that we "used" some large fraction of the pixels. One way to do that (not sure it's the best...) would be to create an array that has the linearindices of moving, then warp it with the transformation using nearest-neighbor interpolation. That will give you another array with linear indices (you want them expressed as Float64 so you have NaNs where appropriate). Then you can create a used = fill(false, indices(moving)) array, and then

for i in warpedinds
    if isfinite(i)
        used[Int(i)] = true
    end
end

That marks any voxel as true if it was used by the transformation. sum(used) should give an estimate of how much of the original image you actually used. Because of the nearest-neighbor interpolation you might get some holes, so one might even consider doing a dilate operation before computing the sum.
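
Putting the pieces together, here is a self-contained sketch of the idea (not existing package code; it assumes ImageTransformations, Interpolations, and ImageMorphology, plus an affine `tfm` such as one built with CoordinateTransformations):

using ImageTransformations, Interpolations, ImageMorphology

function used_fraction(moving::AbstractArray, tfm)
    # An image whose value at each voxel is that voxel's linear index, stored
    # as Float64 so out-of-bounds samples become NaN instead of erroring.
    idximg = Float64.(LinearIndices(moving))
    etp = extrapolate(interpolate(idximg, BSpline(Constant())), NaN)  # nearest neighbor
    warpedinds = warp(etp, tfm)       # same transformation applied to the index image
    used = fill(false, axes(moving))
    for i in warpedinds
        if isfinite(i)
            used[Int(i)] = true       # mark voxels of moving that were sampled
        end
    end
    used = dilate(used)               # fill pinholes left by nearest-neighbor sampling
    return sum(used) / length(used)
end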
