Alignment Network module¶
This module contains the architecture of the alignment model fBAN_v1, which aligns an input scan to a reference coordinate system. Rotations in this model are represented by quaternions. The model first predicts coarse alignment parameters and then refines them with a second, fine alignment prediction.
More details about the architecture can be found in the NeuroImage paper (Moser et al., NeuroImage, 2022).
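Since the model's rotations are quaternions, downstream code typically converts them to rotation matrices before applying them to a scan. A minimal numpy sketch of that conversion (the function name `quat_to_rotmat` and the scalar-first `(w, x, y, z)` ordering are assumptions for illustration, not taken from the fBAN code):

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    q = np.asarray(q, dtype=float)
    q = q / np.linalg.norm(q)  # normalise so the result is a pure rotation
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

# The identity quaternion (1, 0, 0, 0) maps to the identity matrix.
R = quat_to_rotmat([1.0, 0.0, 0.0, 0.0])
```

Normalising the predicted quaternion before conversion is important in practice, since a network output is not guaranteed to have unit norm.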
[Local] Network code copied from the OMNI cluster: ../../../shared/stru0039/fBAN/v1/model.py
- class fetalbrain.alignment.fBAN_v1.AlignmentModel(shape_data: list[int] = [1, 160, 160, 160], dimensions_hidden: list[int] = [16, 32, 64, 128, 256, 128, 64, 32], size_kernel: int = 3)[source]¶
Bases: Module
Model class for the alignment model fBAN, which predicts the alignment parameters for a given scan.
- Parameters:
shape_data – the input shape of the image
dimensions_hidden – the dimensions of the hidden layers in the network
size_kernel – convolutional kernel size, defaults to 3
- Example:
>>> model = AlignmentModel()
- forward(x: Tensor) tuple[Tensor, Tensor, Tensor][source]¶
Forward pass of the alignment model
- Parameters:
x – batch of data of size [B, 1, H, W, D] with pixel values between 0 and 255
- Returns:
translation – translation parameters of size [B, 3] between 0 and 1
rotation – rotation parameters of size [B, 4], represented as quaternions
scaling – scaling parameters of size [B, 3] between 0 and 1
- Example:
>>> model = AlignmentModel()
>>> dummy_scan = torch.rand(1, 1, 160, 160, 160)
>>> translation, rotation, scaling = model(dummy_scan)
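The three returned parameter sets can be combined into a single 4x4 affine transform to be applied to the scan. The sketch below, in plain numpy, shows one straightforward composition; the exact conventions used by the alignment pipeline (parameter rescaling, order of scaling vs. rotation) are assumptions here, not taken from the fBAN source:

```python
import numpy as np

def compose_affine(translation, rotation, scaling):
    """Combine predicted parameters into a 4x4 affine matrix.

    translation: (3,) values in [0, 1]; rotation: (4,) quaternion,
    assumed (w, x, y, z); scaling: (3,) values in [0, 1]. How these
    normalised values map to voxel units is an assumption, not taken
    from the fBAN code.
    """
    w, x, y, z = np.asarray(rotation, float) / np.linalg.norm(rotation)
    # Standard unit-quaternion to rotation-matrix formula.
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
    A = np.eye(4)
    # Scale each axis, then rotate; translation fills the last column.
    A[:3, :3] = R * np.asarray(scaling, float)[None, :]
    A[:3, 3] = np.asarray(translation, float)
    return A

# Identity rotation and unit scaling leave only the translation.
A = compose_affine([0.1, 0.2, 0.3], [1.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```

In a real pipeline the per-sample rows of the [B, 3] and [B, 4] tensors returned by `forward` would each be passed through such a composition (e.g. via `torch.nn.functional.affine_grid` and `grid_sample` for resampling).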