Alignment

This module contains the main functions for aligning scans. A single scan can be aligned using the align_scan() function, a wrapper that loads the alignment model, prepares the scan as a PyTorch tensor, and computes and applies the alignment transformation. The alignment can be applied without scaling (i.e. preserving the size of the brain) or with scaling (i.e. scaling all images to the same reference brain size at 30 gestational weeks (GWs)):

import numpy as np
from fetalbrain.alignment.align import align_scan

dummy_scan = np.random.rand(160, 160, 160)
aligned_scan, params = align_scan(dummy_scan, scale=False, to_atlas=True)
aligned_scan_scaled, params = align_scan(dummy_scan, scale=True)

For aligning a large number of scans, it is recommended to call load_alignment_model(), prepare_scan() and align_to_atlas() directly, so that the alignment model is not reloaded for each scan. For example:

import numpy as np
from fetalbrain.alignment.align import load_alignment_model, prepare_scan, align_to_atlas

model = load_alignment_model()
dummy_scan = np.random.rand(160, 160, 160)
torch_scan = prepare_scan(dummy_scan)
aligned_scan, params = align_to_atlas(torch_scan, model, scale=False)

The align_to_atlas() function can also process batches of data (i.e. multiple scans at once), which can be useful to speed up analysis. More advanced examples can be found in the Example Gallery.
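Since align_to_atlas() takes a [B, 1, H, W, D] tensor, a batch can be assembled by stacking individual scans along a new batch dimension. A minimal sketch (the stacking itself uses only NumPy/PyTorch; the commented lines assume the functions documented below):

```python
import numpy as np
import torch

# Sketch: build a [B,1,H,W,D] batch from several [H,W,D] scans, which is
# the input shape align_to_atlas() expects.
scans = [np.random.rand(160, 160, 160) for _ in range(4)]
batch = torch.stack([torch.from_numpy(s).float().unsqueeze(0) for s in scans])
print(batch.shape)  # torch.Size([4, 1, 160, 160, 160])

# The whole batch can then be aligned in a single forward pass, e.g.:
# model = load_alignment_model()
# aligned, params = align_to_atlas(prepare_scan(batch), model, scale=False)
```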

Module functions

fetalbrain.alignment.align.align_scan(scan: ndarray, scale: bool = False, to_atlas: bool = True) tuple[Tensor, dict][source]

Align a scan to a reference coordinate system.

This function aligns the input scan to either the atlas or BEAN coordinate system, with atlas space as the default. It is a wrapper that loads the alignment model, prepares the scan, and computes and applies the transformation.

Parameters:
  • scan – array containing the scan of size [H,W,D]

  • scale – whether to apply scaling. Defaults to False.

  • to_atlas – whether to align to the atlas coordinate system, otherwise the BEAN coordinate system is used (internal use). Defaults to True.

Returns:
  • aligned_scan – the aligned scan

  • params – dictionary containing the applied transformation parameters

Example

>>> dummy_scan = np.random.rand(160, 160, 160)
>>> aligned_scan, params = align_scan(dummy_scan)
fetalbrain.alignment.align.align_to_atlas(image: Tensor, model: AlignmentModel, return_affine: Literal[False] = False, scale: bool = False) tuple[Tensor, dict][source]
fetalbrain.alignment.align.align_to_atlas(image: Tensor, model: AlignmentModel, return_affine: Literal[True], scale: bool = False) tuple[Tensor, dict, Tensor]

Aligns the scan to the atlas coordinate system using the fBAN model. The function predicts the transformation from the original orientation to the BEAN coordinate system, and then applies an additional transformation to reach the atlas orientation. The BEAN-to-atlas transformation is only well defined for scaled image volumes, so the affine transformation is always generated including scaling. If scale is set to False, the inverse scaling transform is applied after the transformation to atlas space.

data flow (scale=True): unscaled BEAN space -> unscaled atlas space -> scaled atlas space

data flow (scale=False): unscaled BEAN space -> unscaled atlas space

Parameters:
  • image – tensor of size [B,1,H,W,D] containing the image(s) to align with pixel values between 0 and 255

  • model – the model used for inference

  • scale – whether to apply scaling, defaults to False

  • return_affine – whether to return the affine transformation, defaults to False

Returns:
  • aligned_to_atlas_scan – tensor of size [B,1,H,W,D] containing the aligned image(s)

  • param_dict – dictionary containing the applied parameters

  • affine (optional) – tensor containing the affine transformation of size [B,4,4]

Example

>>> model = load_alignment_model()
>>> dummy_scan = np.random.rand(160, 160, 160)
>>> torch_scan = prepare_scan(dummy_scan)
>>> aligned_scan, params = align_to_atlas(torch_scan, model)
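The scaling behaviour described above can be illustrated in miniature. This sketch is not the library's internals: it only shows that, for isotropic scaling, composing the inverse scaling after a rotation-plus-scaling transform recovers the pure rotation, which mirrors the scale=False path where the inverse scaling transform is applied after moving to atlas space:

```python
import math
import torch

s = 0.8
S = torch.eye(3) * s                 # isotropic scaling
S_inv = torch.eye(3) / s             # its inverse
c, sn = math.cos(0.3), math.sin(0.3)
R = torch.tensor([[c, -sn, 0.0],
                  [sn,  c, 0.0],
                  [0.0, 0.0, 1.0]])  # rotation about the z-axis

A_scaled = R @ S                     # transform generated including scaling
A_unscaled = S_inv @ A_scaled        # inverse scaling applied afterwards
print(torch.allclose(A_unscaled, R))  # True
```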
fetalbrain.alignment.align.align_to_bean(image: Tensor, model: AlignmentModel, return_affine: Literal[False] = False, scale: bool = False) tuple[Tensor, dict][source]
fetalbrain.alignment.align.align_to_bean(image: Tensor, model: AlignmentModel, return_affine: Literal[True], scale: bool = False) tuple[Tensor, dict, Tensor]

Aligns the scan to the BEAN coordinate system using the fBAN model.

Parameters:
  • image – tensor of size [B,1,H,W,D] containing the image(s) to align, with pixel values between 0 and 255

  • model – the model used for inference

  • return_affine – whether to return the affine transformation, defaults to False

  • scale – whether to apply scaling, defaults to False

Returns:
  • aligned_image – tensor of size [B,1,H,W,D] containing the aligned image(s)

  • params – dictionary containing the applied parameters

  • affine (optional) – tensor containing the affine transformation of size [B,4,4]

Example

>>> model = load_alignment_model()
>>> dummy_scan = np.random.rand(160, 160, 160)
>>> torch_scan = prepare_scan(dummy_scan)
>>> aligned_scan, params = align_to_bean(torch_scan, model)
fetalbrain.alignment.align.load_alignment_model(model_path: Path | None = None) AlignmentModel[source]

Load the fBAN alignment model

Parameters:

model_path – path to the trained model, defaults to None (uses the default model)

Returns:

model – alignment model with trained weights loaded

Example

>>> model = load_alignment_model()
fetalbrain.alignment.align.prepare_scan(image: ndarray | Tensor) Tensor[source]

Prepares a scan for use with the alignment model, converting it to a [B, C, H, W, D] tensor with values between 0 and 255.

Parameters:

image – numpy array or tensor of size [B, C, H, W, D], or [B, H, W, D], or [H, W, D]

Returns:

tensor of size [B, C, H, W, D] with values between 0 and 255

Example

>>> image = np.random.random_sample((1, 1, 160, 160, 160))
>>> image = prepare_scan(image)
>>> assert torch.max(image) > 1
>>> image = torch.rand((1, 1, 160, 160, 160))
>>> image = prepare_scan(image)
>>> assert torch.max(image) > 1
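As a rough illustration of what this preparation involves, the following sketch (an assumption based on the documented input and output shapes, not the library's implementation) promotes an array to the expected five-dimensional shape and rescales intensities to [0, 255]:

```python
import numpy as np
import torch

def prepare_scan_sketch(image):
    """Illustrative approximation of prepare_scan (not the library code):
    promote the array to a [B, C, H, W, D] tensor and rescale to [0, 255]."""
    tensor = torch.as_tensor(np.asarray(image), dtype=torch.float32)
    while tensor.ndim < 5:               # [H,W,D] -> [B,C,H,W,D]
        tensor = tensor.unsqueeze(0)
    tensor = tensor - tensor.min()       # shift minimum to 0
    if tensor.max() > 0:
        tensor = tensor / tensor.max() * 255.0
    return tensor

out = prepare_scan_sketch(np.random.rand(160, 160, 160))
print(out.shape)  # torch.Size([1, 1, 160, 160, 160])
```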
fetalbrain.alignment.align.transform_from_affine(image: Tensor, transform_affine: Tensor) Tensor[source]

Applies the given affine transformation to the input batch of images

Parameters:
  • image – tensor of size [B,1,H,W,D] containing the image(s) to align

  • transform_affine – tensor of size [B,4,4] containing the affine transformation(s)

Returns:

image_transformed – tensor containing the aligned image

Example

>>> dummy_scan = np.random.rand(160, 160, 160)
>>> torch_scan = prepare_scan(dummy_scan)
>>> transform_identity = torch.eye(4,4).unsqueeze(0)  # identity transform
>>> aligned_scan = transform_from_affine(torch_scan, transform_identity)
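One way such a [B,4,4] affine could be applied to a volume is with PyTorch's affine_grid/grid_sample resampling ops; this is an illustrative sketch, not necessarily how the library implements it:

```python
import torch
import torch.nn.functional as F

# Sketch: resample a [B,1,H,W,D] volume under a [B,4,4] affine transform.
image = torch.rand(1, 1, 160, 160, 160)
affine = torch.eye(4).unsqueeze(0)      # identity transform, [B,4,4]

theta = affine[:, :3, :]                # affine_grid expects [B,3,4]
grid = F.affine_grid(theta, list(image.shape), align_corners=False)
transformed = F.grid_sample(image, grid, align_corners=False)
print(transformed.shape)  # torch.Size([1, 1, 160, 160, 160])
```

With the identity affine, the resampled volume matches the input up to interpolation precision.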
fetalbrain.alignment.align.transform_from_params(image: Tensor, translation: Tensor | None = None, rotation: Tensor | None = None, scaling: Tensor | None = None) Tensor[source]

Transforms the images in the input batch with the given translation, rotation and scaling parameters. If a parameter is not given for a transformation component, the identity value (no effect) is used for that component.

Parameters:
  • image – tensor of size [B,1, H,W,D] containing the image(s) to align

  • translation – tensor with size [B,3] containing translation for each axis between 0 and 1, defaults to None

  • rotation – tensor with size [B,4] containing rotation quaternions, defaults to None

  • scaling – tensor with size [B,3] containing the scaling parameters, defaults to None

Returns:

image_aligned – tensor of size [B,1,H,W,D] containing the aligned image(s)

Example

>>> dummy_scan = np.random.rand(160, 160, 160)
>>> torch_scan = prepare_scan(dummy_scan)
>>> translation = torch.tensor([[0.1, 0.05, 0.1]])
>>> aligned_scan = transform_from_params(torch_scan, translation=translation)
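For reference, a unit quaternion can be converted to the equivalent rotation matrix as follows. This is a generic sketch: the (w, x, y, z) component ordering is an assumption, and the library's quaternion convention may differ:

```python
import torch

def quat_to_rotmat(q):
    """Convert unit quaternions of shape [B,4] (assumed w,x,y,z order) to
    rotation matrices of shape [B,3,3]."""
    q = q / q.norm(dim=1, keepdim=True)   # normalise defensively
    w, x, y, z = q.unbind(dim=1)
    return torch.stack([
        torch.stack([1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)], dim=1),
        torch.stack([2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)], dim=1),
        torch.stack([2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)], dim=1),
    ], dim=1)

identity_q = torch.tensor([[1.0, 0.0, 0.0, 0.0]])  # no rotation
R = quat_to_rotmat(identity_q)
print(R)  # identity matrix, shape [1, 3, 3]
```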