Kelluwen transforms module¶
This module contains helper functions to apply alignment parameters resulting from fBAN to an image. It is likely not necessary to access these functions directly when doing standard image alignment operations, as most are shadowed by functions in the main align.py module.
These functions have been taken from the Kelluwen GitHub repository. They were copied over to minimise dependencies, and their style was adjusted to match the rest of the codebase.
Module functions¶
- fetalbrain.alignment.kelluwen_transforms.apply_affine(image: Tensor, transform_affine: Tensor, shape_output: Size | list[int] | None = None, type_resampling: Literal['bilinear', 'nearest'] = 'bilinear', type_origin: Literal['centre', 'origin'] = 'centre', type_output: Literal['positional', 'named'] = 'positional') Tensor | dict[str, Tensor][source]¶
Applies affine transform to tensor.
- Parameters:
image – image being transformed. Must be of shape (B, C, *).
transform_affine – affine transform being applied. Must be of shape (B, C, *), (B, 1, *), or (B, *).
shape_output – Output shape of transformed image. Must have the same batch and channel as image. If None, the output_shape=image.shape. Defaults to None.
type_resampling – interpolation algorithm used when sampling the image. Available options: 'bilinear' and 'nearest'; defaults to 'bilinear'.
type_origin – point around which the transform is applied, defaults to 'centre'
type_output – Determines how the outputs are returned. If set to 'positional', it returns positional outputs. If set to 'named', it returns a dictionary with named outputs. Defaults to 'positional'.
- Returns:
image_transformed
Example
>>> image = torch.rand((1, 1, 160, 160, 160))
>>> identity_affine = torch.eye(4, dtype=torch.float32).unsqueeze(0).unsqueeze(0)
>>> image_transformed = apply_affine(image, identity_affine)
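Conceptually, this kind of affine resampling can be reproduced with plain PyTorch via `affine_grid` and `grid_sample`. The sketch below illustrates the identity case only; it is not kelluwen's implementation, whose origin and ordering conventions may differ:

```python
import torch
import torch.nn.functional as F

# A small 3D volume of shape (B, C, D, H, W)
image = torch.rand((1, 1, 16, 16, 16))

# Identity affine: theta has shape (B, 3, 4) for 3D inputs
theta = torch.eye(3, 4, dtype=torch.float32).unsqueeze(0)

# Build a sampling grid and resample; with the identity transform the
# output reproduces the input (up to floating-point error)
grid = F.affine_grid(theta, list(image.shape), align_corners=False)
resampled = F.grid_sample(image, grid, mode='bilinear', align_corners=False)
```

With `align_corners=False`, the identity grid lands exactly on pixel centres, so bilinear sampling returns the original values.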
- fetalbrain.alignment.kelluwen_transforms.deconstruct_affine(transform_affine: Tensor, transform_order: Literal['trs', 'tsr', 'rts', 'rst', 'str', 'srt'] = 'srt', type_rotation: Literal['euler_xyz', 'euler_xzy', 'euler_yxz', 'euler_yzx', 'euler_zxy', 'euler_zyx', 'quaternions'] = 'euler_xyz', type_output: Literal['positional', 'named'] = 'positional') tuple | dict[str, Tensor][source]¶
Deconstructs the affine transform into its conforming translation, rotation, and scaling parameters.
- Parameters:
transform_affine – Affine transform being deconstructed. Must be of shape (B, C, 4, 4) or (B, 4, 4).
transform_order – Order of multiplication of translation, rotation, and scaling transforms, defaults to ‘srt’
type_rotation – Type of rotation parameters: quaternions or Euler angles. For Euler angles, the order of the multiplication of the rotations around x, y, and z is represented in the name (euler_xyz, euler_yzx, etc.), defaults to ‘euler_xyz’
type_output – Determines how the outputs are returned. If set to positional, it returns positional outputs. If set to named, it returns a dictionary with named outputs, defaults to ‘positional’
- Returns:
parameter_translation – tensor of size (B, C, 3) (channel dimension is optional, based on whether channel dimension is present in input affine)
parameter_rotation – tensor of size (B, C, 3) (if type_rotation is euler) or (B, C, 4) (if type_rotation is ‘quaternions’)
parameter_scaling – tensor of size (B, C, 3)
Example
>>> transform_affine = torch.eye(4, dtype=torch.float32).unsqueeze(0)
>>> transl, rot, scale = deconstruct_affine(transform_affine, transform_order='srt', type_rotation='euler_xyz', type_output='positional')
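The idea behind such a deconstruction can be sketched in plain torch for one common composition order. The helper below assumes the affine was composed as T @ R @ S (translation applied last); kelluwen's deconstruct_affine supports several transform_order conventions, so this is illustrative only:

```python
import math
import torch

def decompose_trs(affine: torch.Tensor):
    """Split a 4x4 affine assumed to be composed as T @ R @ S.

    Illustrative sketch, not the library's implementation: it assumes
    translation is applied last and scaling first.
    """
    translation = affine[:3, 3]
    linear = affine[:3, :3]
    scaling = linear.norm(dim=0)    # column norms recover the scales
    rotation = linear / scaling     # divide each column by its scale
    return translation, rotation, scaling

# Round trip on a known transform: rotation about z, scaled, then shifted
c, si = math.cos(0.3), math.sin(0.3)
R = torch.tensor([[c, -si, 0.0], [si, c, 0.0], [0.0, 0.0, 1.0]])
s = torch.tensor([2.0, 0.5, 1.5])
t = torch.tensor([1.0, 2.0, 3.0])
A = torch.eye(4)
A[:3, :3] = R * s                   # equivalent to R @ torch.diag(s)
A[:3, 3] = t

translation, rotation, scaling = decompose_trs(A)
```

The column norms of the linear part recover the scales because each column of R @ diag(s) is a unit rotation column multiplied by one scale factor.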
- fetalbrain.alignment.kelluwen_transforms.generate_affine(parameter_translation: Tensor, parameter_rotation: Tensor, parameter_scaling: Tensor, type_output: Literal['positional'] = 'positional', type_rotation: Literal['euler_xyz', 'euler_xzy', 'euler_yxz', 'euler_yzx', 'euler_zxy', 'euler_zyx', 'quaternions'] = 'euler_xyz', transform_order: Literal['trs', 'tsr', 'rts', 'rst', 'str', 'srt'] = 'trs') Tensor[source]¶
- fetalbrain.alignment.kelluwen_transforms.generate_affine(parameter_translation: Tensor, parameter_rotation: Tensor, parameter_scaling: Tensor, type_output: Literal['named'], type_rotation: Literal['euler_xyz', 'euler_xzy', 'euler_yxz', 'euler_yzx', 'euler_zxy', 'euler_zyx', 'quaternions'] = 'euler_xyz', transform_order: Literal['trs', 'tsr', 'rts', 'rst', 'str', 'srt'] = 'trs') dict[str, Tensor]
Generates an affine transform from translation, rotation, and scaling parameters.
- Parameters:
parameter_translation – Translation parameters in pixels between -80 and 80. Must be of shape (B, C, parameters) or (B, parameters), with parameters=2 or 3 for 2D and 3D images, respectively.
parameter_rotation – Rotation parameters in radians. Must be of shape (B, C, parameters) or (B, parameters), with parameters=1, 3 or 4, for 2D, 3D Euler angles, and 3D quaternions, respectively.
parameter_scaling – Scaling parameters. Must be of shape (B, C, parameters) or (B, parameters), with parameters=2 or 3 for 2D and 3D images, respectively.
type_rotation – Type of rotation parameters: quaternions or Euler angles. For Euler angles, the order of the multiplication of the rotations around x, y, and z is represented in the name (euler_xyz, euler_yzx, etc.), defaults to “euler_xyz”
transform_order – Order of multiplication of translation, rotation, and scaling transforms, defaults to “trs”
type_output – Determines how the outputs are returned. If set to "positional", it returns positional outputs. If set to "named", it returns a dictionary with named outputs. Defaults to 'positional'.
- Returns:
transform_affine – torch.Tensor of shape (B, C, 4, 4)
or dictionary – {“transform_affine”: transform_affine}
Example
>>> parameter_translation = torch.rand((1, 3))
>>> parameter_rotation = torch.rand((1, 4))
>>> parameter_scaling = torch.rand((1, 3))
>>> transform_affine = generate_affine(parameter_translation, parameter_rotation, parameter_scaling, type_rotation='quaternions')
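The composition itself amounts to multiplying homogeneous 4x4 building blocks in the order named by transform_order. The sketch below shows a generic 'trs' composition (translation applied last) with an identity rotation for simplicity; it is not kelluwen's internal code:

```python
import torch

# Homogeneous 4x4 building blocks
t = torch.tensor([4.0, -2.0, 1.0])      # translation offsets
s = torch.tensor([1.0, 2.0, 0.5])       # per-axis scales

T = torch.eye(4)
T[:3, 3] = t                            # translation in the last column
S = torch.eye(4)
S[:3, :3] = torch.diag(s)               # scales on the diagonal
R = torch.eye(4)                        # identity rotation for simplicity

# 'trs': scale first, then rotate, then translate
A = T @ R @ S
```

Swapping the matmul order changes which operation is applied first, which is why generate_affine and deconstruct_affine must agree on transform_order.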
- fetalbrain.alignment.kelluwen_transforms.generate_rotation(parameter_rotation: Tensor, type_rotation: Literal['euler_xyz', 'euler_xzy', 'euler_yxz', 'euler_yzx', 'euler_zxy', 'euler_zyx', 'quaternions'] = 'euler_xyz', type_output: Literal['positional', 'named'] = 'positional') Tensor | dict[str, Tensor][source]¶
Generates a rotation transform from rotation parameters.
- Parameters:
parameter_rotation – Rotation parameters. Must be of shape (B, C, parameters) or (B, parameters), with parameters=1, 3 or 4, for 2D, 3D Euler angles, and 3D quaternions, respectively.
type_rotation – Type of rotation parameters: quaternions or Euler angles. For Euler angles, the order of the multiplication of the rotations around x, y, and z is represented in the name (euler_xyz, euler_yzx, etc.). This variable will be ignored for 2D rotations. Defaults to 'euler_xyz'.
type_output – Determines how the outputs are returned. If set to “positional”, it returns positional outputs. If set to “named”, it returns a dictionary with named outputs. Defaults to positional
- Returns:
transform_rotation – tensor of shape (B, C, 4, 4)
Example
# Rotation transform for quaternions
>>> parameter_rotation = torch.rand((1, 4))
>>> transform_rotation = generate_rotation(parameter_rotation, type_rotation='quaternions')
# Rotation transform for Euler angles
>>> parameter_rotation = torch.rand((1, 3))
>>> transform_rotation = generate_rotation(parameter_rotation, type_rotation='euler_xyz')
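For the quaternion case, a unit quaternion maps to a rotation matrix by the standard formula. The sketch below is a generic conversion assuming a (w, x, y, z) ordering, which may or may not match kelluwen's convention:

```python
import torch

def quaternion_to_matrix(q: torch.Tensor) -> torch.Tensor:
    """Standard conversion of a quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = (q / q.norm()).unbind(-1)   # normalise to a unit quaternion
    return torch.stack([
        torch.stack([1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)]),
        torch.stack([2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)]),
        torch.stack([2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]),
    ])

q = torch.tensor([0.7, 0.2, -0.4, 0.5])
R = quaternion_to_matrix(q)
```

Any valid rotation matrix produced this way is orthogonal with determinant 1, which makes a quick sanity check easy.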
- fetalbrain.alignment.kelluwen_transforms.generate_scaling(parameter_scaling: Tensor, type_output: Literal['positional', 'named'] = 'positional') Tensor | dict[str, Tensor][source]¶
Generates a scaling transform from scaling parameters.
- Parameters:
parameter_scaling – Scaling parameters. Must be of shape (B, C, parameters) or (B, parameters), with parameters=2 or 3 for 2D and 3D images, respectively.
type_output – Determines how the outputs are returned. If set to “positional”, it returns positional outputs. If set to “named”, it returns a dictionary with named outputs. Defaults to positional.
- Returns:
transform_scaling – tensor of shape (B, C, 4, 4)
Example
>>> parameter_scaling = torch.rand((1, 3))
>>> transform_scaling = generate_scaling(parameter_scaling)
- fetalbrain.alignment.kelluwen_transforms.generate_translation(parameter_translation: Tensor, type_output: Literal['positional', 'named'] = 'positional') Tensor | dict[str, Tensor][source]¶
Generates a translation transform from translation parameters.
- Parameters:
parameter_translation – Translation parameters. Must be of shape (B, C, parameters) or (B, parameters), with parameters=2 or 3 for 2D and 3D images, respectively.
type_output – Determines how the outputs are returned. If set to “positional”, it returns positional outputs. If set to “named”, it returns a dictionary with named outputs, defaults to “positional”
- Raises:
ValueError – raised when the dimension of the batched translation parameters is greater than 3
ValueError – raised when the length of the translation parameters is not 2 or 3
ValueError – raised when type_output is not "positional" or "named"
- Returns:
transform_translation – torch.Tensor of shape (B, C, 4, 4)
or dictionary – {“transform_translation”: transform_translation}
Example
>>> parameter_translation = torch.rand((1, 3))
>>> transform_translation = generate_translation(parameter_translation)
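In homogeneous coordinates, a translation transform is just an identity matrix with the offsets in the last column. The generic construction below illustrates this; kelluwen's exact conventions (e.g. axis order) may differ:

```python
import torch

# Generic homogeneous translation matrix: identity with offsets in the
# last column
t = torch.tensor([5.0, -3.0, 2.0])
M = torch.eye(4)
M[:3, 3] = t

# Applying it to a homogeneous point shifts the point by t
p = torch.tensor([1.0, 1.0, 1.0, 1.0])
shifted = M @ p
```

This is why the returned transform has shape (B, C, 4, 4) even though only three parameters are needed per translation.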