Shaun API Documentation

Submodules

smug.shaun_blocks module

class smug.shaun_blocks.ConvBlock(in_channels, out_channels, kernel=3, stride=1, pad='reflect', bias=False, normal='batch', activation='relu', upsample=False, **kwargs)[source]

Bases: Module

A modifiable convolutional layer for deep networks.

Parameters
  • in_channels (int) – The number of channels fed into the convolutional layer.

  • out_channels (int) – The number of channels fed out of the convolutional layer.

  • kernel (int, optional) – The size of the convolutional kernel. Default is 3, i.e. a 3x3 convolutional kernel.

  • stride (int, optional) – The stride of the convolution. Default is 1.

  • pad (str, optional) – The type of padding to use when calculating the convolution. Default is “reflect”.

  • bias (bool, optional) – Whether or not to include a bias in the linear transformation. Default is False.

  • normal (str, optional) – The type of normalisation layer to use. Default is “batch”.

  • activation (str, optional) – The activation function to use. Default is “relu” to use the Rectified Linear Unit (ReLU) activation function.

  • upsample (bool, optional) – Whether or not to upsample the input to the layer. This is useful in decoder layers in autoencoders. Upsampling is done via factor-of-2 interpolation (currently only implemented for doubling the size of the input; this may be generalised if there is demand). Default is False.

forward(inp)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
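
Example

A minimal usage sketch (not part of the package's own docstring): the tensor shape and the resulting feature-map sizes are illustrative assumptions, not guarantees of the implementation.

>>> import torch
>>> from smug.shaun_blocks import ConvBlock
>>> block = ConvBlock(in_channels=1, out_channels=64, kernel=3, stride=2)
>>> x = torch.randn(4, 1, 256, 256)  # assumed batch of 4 single-channel images
>>> y = block(x)  # call the module itself, not forward(), so registered hooks run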

class smug.shaun_blocks.ConvTranspBlock(in_channels, out_channels, kernel=3, stride=1, bias=False, pad='zeros', normal='batch', activation='relu', **kwargs)[source]

Bases: Module

A modifiable transposed convolutional layer.

Parameters
  • in_channels (int) – The number of channels fed into the convolutional layer.

  • out_channels (int) – The number of channels fed out of the convolutional layer.

  • kernel (int, optional) – The size of the convolutional kernel. Default is 3, i.e. a 3x3 convolutional kernel.

  • stride (int, optional) – The stride of the convolution. Default is 1.

  • pad (str, optional) – The type of padding to use when calculating the convolution. Default is “zeros”.

  • bias (bool, optional) – Whether or not to include a bias in the linear transformation. Default is False.

  • normal (str, optional) – The type of normalisation layer to use. Default is “batch”.

  • activation (str, optional) – The activation function to use. Default is “relu” to use the Rectified Linear Unit (ReLU) activation function.

forward(inp)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
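
Example

A minimal usage sketch, assuming a decoder-style setting in which the transposed convolution increases the spatial resolution; the shapes are illustrative assumptions.

>>> import torch
>>> from smug.shaun_blocks import ConvTranspBlock
>>> block = ConvTranspBlock(in_channels=64, out_channels=32, kernel=3, stride=2)
>>> x = torch.randn(4, 64, 128, 128)  # assumed feature map from an encoder stage
>>> y = block(x)  # spatial size grows according to the stride and padding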

class smug.shaun_blocks.ResBlock(in_channels, out_channels, kernel=3, stride=1, pad='reflect', bias=False, normal='batch', activation='relu', upsample=False, use_dropout=False)[source]

Bases: Module

A modifiable residual block for deep neural networks.

Parameters
  • in_channels (int) – The number of channels fed into the residual layer.

  • out_channels (int) – The number of channels fed out of the residual layer.

  • kernel (int, optional) – The size of the convolutional kernel. Default is 3, i.e. a 3x3 convolutional kernel.

  • stride (int, optional) – The stride of the convolution. Default is 1.

  • pad (str, optional) – The type of padding to use when calculating the convolution. Default is “reflect”.

  • bias (bool, optional) – Whether or not to include a bias in the linear transformation. Default is False.

  • normal (str, optional) – The type of normalisation layer to use. Default is “batch”.

  • activation (str, optional) – The activation function to use. Default is “relu” to use the Rectified Linear Unit (ReLU) activation function.

  • upsample (bool, optional) – Whether or not to upsample the input to the layer. This is useful in decoder layers in autoencoders. Upsampling is done via factor-of-2 interpolation (currently only implemented for doubling the size of the input; this may be generalised if there is demand). Default is False.

forward(inp)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
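
Example

A minimal usage sketch; the equal in_channels and out_channels and the input shape are assumptions chosen so that the residual addition is well defined.

>>> import torch
>>> from smug.shaun_blocks import ResBlock
>>> block = ResBlock(in_channels=64, out_channels=64, use_dropout=True)
>>> x = torch.randn(4, 64, 128, 128)  # assumed shape for illustration
>>> y = block(x)  # residual blocks typically preserve the spatial dimensions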

smug.shaun_inference module

class smug.shaun_inference.Corrector(ckp, norm, map_location, in_channels=1, out_channels=1, nef=64, error=None)[source]

Bases: object

This is the object used to correct for seeing in observations.

Parameters
  • ckp (dict) – The torch checkpoint containing the data needed to reconstruct the Shaun model.

  • norm (int) – The normalisation factor used during training (e.g. 1514 for CaII8542, and 3145 for Halpha).

  • map_location (torch.device) – Where to map the model for inference.

  • in_channels (int, optional) – The number of channels of the input images. Default is 1.

  • out_channels (int, optional) – The number of channels of the output images. Default is 1.

  • nef (int, optional) – The number of base feature maps used in the first convolutional layer. Default is 64.

  • error (float, optional) – The error on the estimates from the network. Default is None, which takes the last training error from the model file.

correct_image(img)[source]

This method performs the seeing correction on the given image.

Parameters

img (numpy.ndarray) – The image to be corrected by the network.
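
Example

A minimal usage sketch: the checkpoint and observation file names are hypothetical, and norm=3145 follows the Halpha example given above.

>>> import numpy as np
>>> import torch
>>> from smug.shaun_inference import Corrector
>>> ckp = torch.load("shaun_halpha.pth", map_location="cpu")  # hypothetical checkpoint file
>>> corrector = Corrector(ckp, norm=3145, map_location=torch.device("cpu"))
>>> img = np.load("halpha_observation.npy")  # hypothetical observation
>>> corrected = corrector.correct_image(img)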

class smug.shaun_inference.SpeedyCorrector(model_path, error=None)[source]

Bases: Corrector

This class performs the same corrections but uses the TorchScript model. This is ~4x faster but limits the batch size to 16.

N.B. The TorchScript models are not currently provided as part of smug.

Parameters
  • model_path (str) – The path to the TorchScript model.

  • error (float, optional) – The error to apply to the estimates.
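
Example

A minimal usage sketch, assuming a locally available TorchScript file (not currently distributed with smug); the file names are hypothetical.

>>> import numpy as np
>>> from smug.shaun_inference import SpeedyCorrector
>>> corrector = SpeedyCorrector("shaun_halpha_scripted.pt")  # hypothetical TorchScript model
>>> img = np.load("halpha_observation.npy")  # hypothetical observation
>>> corrected = corrector.correct_image(img)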

smug.shaun_inference.pretrained_shaun_corrector(line: str, version: str = '1.0.0', map_location: Optional[device] = None) → Corrector[source]

Load a pretrained Shaun model into a Corrector. The weights will be cached as described by torch.hub; see its documentation for details.

Parameters
  • line (str) – The spectral line to load the weights for: "Halpha" or "CaII8542".

  • version (str, optional) – The version of the model to load. Default is "1.0.0".

  • map_location (torch.device, optional) – Where to remap arrays during the loading process; by default this is set to the CPU to allow loading on any platform.
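
Example

A minimal usage sketch; the observation file name is hypothetical and stands in for a numpy.ndarray the caller is assumed to have loaded already.

>>> import numpy as np
>>> from smug.shaun_inference import pretrained_shaun_corrector
>>> corrector = pretrained_shaun_corrector("Halpha")  # weights are downloaded and cached via torch.hub
>>> img = np.load("halpha_observation.npy")  # hypothetical observation
>>> corrected = corrector.correct_image(img)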

smug.shaun_model module

class smug.shaun_model.Shaun(in_channels, out_channels, nef)[source]

Bases: Module

This is the base class for the neural network model used to correct for seeing.

Parameters
  • in_channels (int) – The number of channels that the images have.

  • out_channels (int) – The number of channels that the output images have.

  • nef (int) – The base number of feature maps to use for the first convolutional layer.

forward(inp)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
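
Example

A minimal usage sketch; the argument values mirror the defaults used by Corrector (in_channels=1, out_channels=1, nef=64) and the input shape is an illustrative assumption.

>>> import torch
>>> from smug.shaun_model import Shaun
>>> model = Shaun(in_channels=1, out_channels=1, nef=64)
>>> x = torch.randn(1, 1, 256, 256)  # assumed single-channel input image
>>> y = model(x)  # call the module itself so registered hooks run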