antsRegistration with masks #1864

Open
smbailes-bu opened this issue Mar 25, 2025 · 13 comments

@smbailes-bu commented Mar 25, 2025

Operating system and version

Rocky Linux 8.6 (Green Obsidian) on computing cluster

CPU architecture

x86_64 (PC, Intel Mac, other Intel/AMD)

ANTs code version

2.1.0-g78931

ANTs installation type

Other (please specify below)
I am using an installation on a shared computing cluster, so I am not positive how it was installed.

Summary of the problem

I am trying to use antsRegistration to register my functional BOLD data to my anatomical MRI data using a target ROI mask, because I want to make sure that the 4th ventricle in particular is well registered. Something similar has been done successfully in my lab using the brainstem, so I adapted that script to use the 4th-ventricle masks instead. However, I consistently get this error at the first rigid registration stage:

Exception caught:
itk::ExceptionObject (0x3883af0)
Location: "unknown"
File: /home/ec2-user/ants-binaries/ITKv4-install/include/ITK-4.7/itkMattesMutualInformationImageToImageMetricv4.hxx
Line: 430
Description: itk::ERROR: MattesMutualInformationImageToImageMetricv4(0x37a4a00): Too many samples map outside moving image buffer. There are only 486 valid points out of 1045299 total points. The images do not sufficiently overlap. They need to be initialized to have more overlap before this metric will work. For instance, you can align the image centers by translation.

The original script from my lab didn't use the --initial-moving-transform flag, so I added it, thinking that would fix the issue, but it did not. I also tried computing the initial transform from each pair of target/moving masks, but that didn't fix the error either; it only changed the number of valid points that overlap.

I also loaded all the volumes I use into freeview and verified that they are not empty, are binary when necessary, and are very well aligned to begin with (see screenshot).

I have also tried different interpolations and metrics for the Rigid stage (MI, CC), but again got the same error, just with slightly different numbers of valid points.

Any advice on this issue would be incredibly helpful. I feel like I have tried everything I personally can think of, but I am also pretty new to using ANTs.

Commands to reproduce the problem.

#!/bin/bash
target=brainmask.nii.gz
moving=run01_meanvol_stripped.nii.gz
output=reg01toref_ants_v4
target_mask=brainmask_bin.nii.gz
moving_mask=run01_meanvol_mask.nii.gz
target_mask_v4=v4_dil.nii.gz
moving_mask_v4=run01_sbref_synthseg_v4_dil.nii.gz

antsRegistration --dimensionality 3 --float 0 \
  --output [$output,${output}.nii.gz] \
  --interpolation BSpline[5] \
  --initial-moving-transform [ $target,$moving,1] \
  --transform Rigid[ 0.1 ] \
  --metric Mattes[ ${target},${moving},1,64,Regular, 0.8 ] \
  --convergence [ 75x25x25,1e-7,10 ] \
  --shrink-factors 2x1x1 \
  --smoothing-sigmas 1.0x0.5x0.0mm \
  --masks [ ${target_mask_v4},${moving_mask_v4} ] \
  --transform Rigid[ 0.1 ] \
  --metric Mattes[ ${target},${moving},1,64,Regular, 0.8 ] \
  --convergence [ 75x25x25,1e-7,10 ] \
  --shrink-factors 2x1x1 \
  --smoothing-sigmas 1.0x0.5x0.0mm \
  --masks [ ${target_mask_v4},${moving_mask_v4} ] \
  --transform Affine[ 0.1 ] \
  --metric CC[ ${target},${moving},1,7,None ] \
  --convergence [ 450x150x100,1e-7,10 ] \
  --shrink-factors 3x2x1 \
  --smoothing-sigmas 1.2011224087864498x0.735534255037358x0.0mm \
  --masks [ ${target_mask_v4},${moving_mask_v4} ] \
  --transform SyN[ 0.1 ] \
  --metric CC[ ${target},${moving},1,4,None,1,1 ] \
  --convergence [ 500x180x60x20x20,1e-6,10 ] \
  --shrink-factors 4x3x2x1x1 \
  --smoothing-sigmas 2.0x1.5x1.0x0.5x0.0mm \
  --masks [ NOMASK, NOMASK ] \
  --transform SyN[ 0.1 ] \
  --metric CC[ ${target},${moving},1,4,None,1,1 ] \
  --convergence [ 500x180x60x20x20,1e-6,10 ] \
  --shrink-factors 4x3x2x1x1 \
  --smoothing-sigmas 2.0x1.5x1.0x0.5x0.0mm \
  --masks [ ${target_mask_v4},${moving_mask_v4} ]

Output of the command with verbose output.

log.txt

Data to reproduce the problem

data.zip

@cookpa (Member) commented Mar 25, 2025

If the mask is too small compared to the image, the registration will fail. I'm guessing the brain stem had just about enough points to not trigger the error.

I think the best way to do this would be to segment the 4th ventricle in both images, then use a multi-metric approach: MI or CC for the brain images, and Mean squares for the segmentations.
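
For concreteness, here is a minimal sketch of what such a multi-metric stage could look like, reusing the variables from the script above; the 0.75/0.25 weights are illustrative assumptions, and the dilated v4 masks merely stand in for proper 4th-ventricle segmentations:

# One stage, two metrics: Mattes MI on the intensity images plus MeanSquares
# on the 4th-ventricle segmentations (metric weights are combined within a stage)
antsRegistration --dimensionality 3 --float 0 \
  --output [${output},${output}.nii.gz] \
  --initial-moving-transform [ ${target},${moving},1 ] \
  --transform Rigid[ 0.1 ] \
  --metric Mattes[ ${target},${moving},0.75,64,Regular,0.8 ] \
  --metric MeanSquares[ ${target_mask_v4},${moving_mask_v4},0.25,0 ] \
  --convergence [ 75x25x25,1e-7,10 ] \
  --shrink-factors 2x1x1 \
  --smoothing-sigmas 1.0x0.5x0.0mm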

@smbailes-bu (Author)

This is the screenshot showing the volumes and masks. The green is the anatomically defined v4 mask and the blue outline is the functional v4 mask. I dilated each mask to make it larger, hoping that would help with the small mask size, but based on the screenshot, would you say the masks are still too small?

@ntustison (Member)

Just an FYI, but have you considered using ANTsPy or ANTsR? Both have a tool (label_image_registration and labelImageRegistration, respectively) designed specifically for label images while also allowing the inclusion of intensity images.

@smbailes-bu (Author)

No, I have not tried those before, but I will look into them. I have just been getting familiar with command-line ANTs.

@ntustison (Member)

Yeah, that would be my recommendation, especially if you're just getting started. Here's an example of how to run the tool in both ANTsPy and ANTsR. It's not the most meaningful example but it does demonstrate usage.
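
In case that example link is unavailable, here is a rough ANTsPy sketch of the kind of call being described. The fixed_label_images / moving_label_images argument names appear later in this thread; the intensity-image arguments, the label file names, and the shape of the return value are assumptions, so check help(ants.label_image_registration) in your installed version:

import ants

# Intensity images (file names from the script above)
fixed = ants.image_read("brainmask.nii.gz")
moving = ants.image_read("run01_meanvol_stripped.nii.gz")

# Label images with matching integer labels in both spaces
# (placeholder names; e.g., 4th ventricle, brainstem, thalamus from FreeSurfer/SynthSeg)
fixed_labels = ants.image_read("anat_labels.nii.gz")
moving_labels = ants.image_read("func_labels.nii.gz")

# Argument names beyond *_label_images are assumed; verify with help()
reg = ants.label_image_registration(
    fixed_label_images=fixed_labels,
    moving_label_images=moving_labels,
    fixed_intensity_images=fixed,
    moving_intensity_images=moving,
    verbose=True,
)

# Assuming the result mirrors ants.registration's transform dictionary
warped = ants.apply_transforms(fixed=fixed, moving=moving,
                               transformlist=reg["fwdtransforms"])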

@smbailes-bu (Author) commented Mar 25, 2025

What is the difference between using ants.label_image_registration and using ants.registration with mask parameters? In my installation of ANTsPy, when I try to call ants.label_image_registration I get this error: AttributeError: module 'ants' has no attribute 'label_image_registration'

@ntustison (Member)

A mask specifies the region over which the similarity metric and gradient are calculated for the associated intensity images during a registration stage. Label image registration, on the other hand, treats each unique label as a distinct binary image pair that is fed to the MSQ metric, since accurate boundaries are assumed.

In my installation of ANTsPy, when I try to call ants.label_image_registration I get this error: AttributeError: module 'ants' has no attribute 'label_image_registration'

It was only introduced a couple of months ago. Did you install directly from the current GitHub repo?
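
For reference, one common way to get the development version is to install directly from the repository with pip; note that this builds ANTsPy from source, which can take quite a while:

pip install git+https://github.com/ANTsX/ANTsPy.git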

@smbailes-bu (Author)

No, I used pip, because when I tried installing from the GitHub repo it seemed to get stuck building the wheel. I will try again, be more patient, and see if it works.

If what I care about is making sure the 4th ventricle in functional space is well registered to anatomical space, would you expect antsRegistration with the mask and moving_mask set to be sufficient? The other thing to note is that I am currently dilating both the anatomically defined 4th ventricle (FreeSurfer recon output) and the functionally defined 4th ventricle (FreeSurfer SynthSeg). If I were to use label_image_registration, I assume I would not want to dilate the masks, since it sounds like it depends on accurate boundaries? The idea of the dilated masks was to allow some room for error in the SynthSeg-defined masks.

@ntustison (Member) commented Mar 26, 2025

If what I care about is making sure the 4th ventricle in functional space is well registered to anatomical space, would you expect antsRegistration with the mask and moving_mask set to be sufficient?

If you want to make sure that the 4th ventricle is aligned, leveraging regional labels for that region (whether manually delineated or produced by automated methods) is generally going to lead to better alignment than using intensity-only information through the registration optimization, even if the latter employs masking. In other words, you already have trained networks (FreeSurfer-based) providing estimates of the regions you want to align, and they leverage the intensity information in a much more sophisticated way than the registration algorithm does.

@smbailes-bu (Author)

Are the regional label inputs the same as the binary masks of the regions? Or can/should you use a probabilistic label?

@ntustison (Member)

You can use probabilistic labels, and it would be similar to what's done under the hood in label_image_registration, which just simplifies common registration scenarios involving segmentations.
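
As a rough sketch of that idea, assuming your ANTsPy version exposes the multivariate_extras argument of ants.registration (which reportedly applies the extra metric terms during the deformable stage), a probabilistic 4th-ventricle map could be folded in as a mean-squares term; the probability-map file names and the 0.5 weight are illustrative only:

import ants

fixed = ants.image_read("brainmask.nii.gz")
moving = ants.image_read("run01_meanvol_stripped.nii.gz")

# Probabilistic (or binary) 4th-ventricle maps in each space (placeholder names)
fixed_v4 = ants.image_read("anat_v4_prob.nii.gz")
moving_v4 = ants.image_read("func_v4_prob.nii.gz")

# Each multivariate_extras entry: (metric name, fixed image, moving image, weight, sampling)
reg = ants.registration(
    fixed=fixed,
    moving=moving,
    type_of_transform="SyNRA",  # rigid + affine + SyN
    multivariate_extras=[("meansquares", fixed_v4, moving_v4, 0.5, 0)],
    verbose=True,
)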

@smbailes-bu (Author)

Thank you both so much for your help.

One last question: for label_image_registration, do you input an entire segmentation file, or can I just use the mask for one region? Or does it have to be a .label file? I tried using just anat_v4.nii.gz and func_v4.nii.gz as the fixed_label_images and moving_label_images, respectively, but got an error: ValueError: Number of labels must be >= 3.

Is this because I am only passing a single binary mask to both _label_images inputs? In that case, could I, for example, use the aseg.mgz as the fixed_label_images and the SynthSeg segmentation as the moving_label_images? Or should I just include more binary masks, such as the brainstem and thalamus, to reach at least 3 labels? I can also move this discussion to the ANTsPy issues page if this question requires a more in-depth answer. Thanks again.

@ntustison (Member)

Yeah, please go ahead and move the discussion to the ANTsPy repo. Regardless of which direction you ultimately decide on, I would recommend using ants.registration(...) instead of the antsRegistration command line.
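
For anyone landing here later, a minimal sketch of the ants.registration(...) equivalent of the masked command-line call, assuming a recent ANTsPy that accepts both mask and moving_mask (file names taken from the script at the top of the issue):

import ants

fixed = ants.image_read("brainmask.nii.gz")
moving = ants.image_read("run01_meanvol_stripped.nii.gz")
fixed_mask = ants.image_read("v4_dil.nii.gz")
moving_mask = ants.image_read("run01_sbref_synthseg_v4_dil.nii.gz")

# mask / moving_mask restrict where the metric is evaluated, analogous to --masks
reg = ants.registration(
    fixed=fixed,
    moving=moving,
    type_of_transform="SyNRA",
    mask=fixed_mask,
    moving_mask=moving_mask,
    verbose=True,
)

warped = reg["warpedmovout"]  # moving image resampled into the fixed space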
