
mmagic.models.data_preprocessors

Package Contents

Classes

DataPreprocessor

Image pre-processor for generative models. This class provides normalization and BGR-to-RGB conversion for image tensor inputs.

MattorPreprocessor

DataPreprocessor for matting models.

class mmagic.models.data_preprocessors.DataPreprocessor(mean: Union[Sequence[Union[float, int]], float, int] = 127.5, std: Union[Sequence[Union[float, int]], float, int] = 127.5, pad_size_divisor: int = 1, pad_value: Union[float, int] = 0, pad_mode: str = 'constant', non_image_keys: Optional[Tuple[str, List[str]]] = None, non_concentate_keys: Optional[Tuple[str, List[str]]] = None, output_channel_order: Optional[str] = None, data_keys: Union[List[str], str] = 'gt_img', input_view: Optional[tuple] = None, output_view: Optional[tuple] = None, stack_data_sample=True)[source]

Bases: mmengine.model.ImgDataPreprocessor

Image pre-processor for generative models. This class provides normalization and BGR-to-RGB conversion for image tensor inputs. The input of this class should be a dict whose keys are inputs and data_samples.

In addition to tensor inputs, this class supports dict inputs:

  • If the value is a Tensor and its key is not contained in _NON_IMAGE_KEYS, it is processed as an image tensor.

  • If the value is a Tensor and its key belongs to _NON_IMAGE_KEYS, it remains unchanged.

  • If the value is a string or an integer, it remains unchanged.

Parameters
  • mean (Sequence[float or int], float or int, optional) – The pixel mean of image channels. Note that the normalization operation is performed after channel order conversion. If not specified, images will not be normalized. Defaults to 127.5.

  • std (Sequence[float or int], float or int, optional) – The pixel standard deviation of image channels. Note that the normalization operation is performed after channel order conversion. If not specified, images will not be normalized. Defaults to 127.5.

  • pad_size_divisor (int) – The size of padded image should be divisible by pad_size_divisor. Defaults to 1.

  • pad_value (float or int) – The padded pixel value. Defaults to 0.

  • pad_mode (str) – Padding mode for torch.nn.functional.pad. Defaults to ‘constant’.

  • non_image_keys (List[str] or str) – Keys for fields that do not need to be processed as images (padding, channel conversion and normalization). If not passed, the keys in _NON_IMAGE_KEYS will be used. This argument only takes effect when inputs is a dict or a list of dicts. Defaults to None.

  • non_concatenate_keys (List[str] or str) – Keys for fields that do not need to be concatenated. If not passed, the keys in _NON_CONCATENATE_KEYS will be used. This argument only takes effect when inputs is a dict or a list of dicts. Defaults to None.

  • output_channel_order (str, optional) – The desired channel order of the data preprocessor's output. This is also the desired input channel order of the model (and most likely the output order of the model as well). If not passed, no channel order conversion is performed. Defaults to None.

  • data_keys (List[str] or str) – Keys to preprocess in data samples. Defaults to ‘gt_img’.

  • input_view (tuple, optional) – The view of the input tensor. This argument may be deleted in the future. Defaults to None.

  • output_view (tuple, optional) – The view of the output tensor. This argument may be deleted in the future. Defaults to None.

  • stack_data_sample (bool) – Whether to stack a list of data samples into one data sample. Only supported when the input data samples are DataSamples. Defaults to True.
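As an illustration of how pad_size_divisor determines the padded size, here is a minimal pure-Python sketch. The helper pad_to_divisor is hypothetical; the actual class pads tensors with torch.nn.functional.pad using pad_value and pad_mode:

```python
import math

def pad_to_divisor(height, width, divisor):
    """Return the padded (H, W) and the per-side padding amounts so that
    both spatial sizes are divisible by `divisor`.  Hypothetical helper,
    not mmagic code."""
    target_h = math.ceil(height / divisor) * divisor
    target_w = math.ceil(width / divisor) * divisor
    return target_h, target_w, target_h - height, target_w - width

# With pad_size_divisor=32, a 250x260 image is padded to 256x288.
print(pad_to_divisor(250, 260, 32))  # (256, 288, 6, 28)
```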

_NON_IMAGE_KEYS = ['noise']
_NON_CONCATENATE_KEYS = ['num_batches', 'mode', 'sample_kwargs', 'eq_cfg']
cast_data(data: CastData) → CastData [source]

Copy data to the target device.

Parameters

data (dict) – Data returned by DataLoader.

Returns

Inputs and data sample at target device.

Return type

CollatedResult

static _parse_channel_index(inputs) → int [source]

Parse the channel index of inputs.

_parse_channel_order(key: str, inputs: torch.Tensor, data_sample: Optional[mmagic.structures.DataSample] = None) → str [source]
_parse_batch_channel_order(key: str, inputs: Sequence, data_samples: Optional[Sequence[mmagic.structures.DataSample]]) → str [source]

Parse the channel order of inputs in a batch.

_update_metainfo(padding_info: torch.Tensor, channel_order_info: Optional[dict] = None, data_samples: Optional[mmagic.utils.typing.SampleList] = None) → mmagic.utils.typing.SampleList [source]

Update padding_info and channel_order in the metainfo of a batch of data_samples. For channel order, we assume that the same field shares the same channel order across data samples; therefore channel_order_info is passed as a dict whose keys and values are field names and the corresponding channel orders. For padding info, we assume that the padding is the same across all fields of a sample but may vary between samples; therefore padding_info is passed as a Tensor of shape (B, 1, 1).

Parameters
  • padding_info (Tensor) – The padding info of each sample. Shape like (B, 1, 1).

  • channel_order_info (dict, optional) – The channel order of each target field. Keys and values are field names and the corresponding channel orders respectively.

  • data_samples (List[DataSample], optional) – The data samples to be updated. If not passed, will initialize a list of empty data samples. Defaults to None.

Returns

The updated data samples.

Return type

List[DataSample]

_do_conversion(inputs: torch.Tensor, inputs_order: str = 'BGR', target_order: Optional[str] = None) → Tuple[torch.Tensor, str] [source]

Conduct channel order conversion for a batch of inputs, and return the converted inputs together with the order after conversion.

inputs_order:
  • RGB / BGR: Convert to the target order.

  • SINGLE: Return unchanged.
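The RGB/BGR case amounts to reversing the channel axis, while single-channel data passes through. A minimal pure-Python sketch of this rule (convert_channel_order is a hypothetical helper operating on a per-channel list, not mmagic's tensor implementation):

```python
def convert_channel_order(channels, inputs_order, target_order):
    """Sketch of the conversion rule: RGB <-> BGR is a reversal of the
    channel axis; single-channel inputs are returned unchanged.
    Hypothetical helper, not mmagic code."""
    if target_order is None or inputs_order == 'single':
        return list(channels)          # nothing to convert
    if inputs_order == target_order:
        return list(channels)          # already in the target order
    return list(channels)[::-1]        # RGB <-> BGR

print(convert_channel_order([255, 128, 0], 'BGR', 'RGB'))  # [0, 128, 255]
```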

_do_norm(inputs: torch.Tensor, do_norm: Optional[bool] = None) → torch.Tensor [source]
_preprocess_image_tensor(inputs: torch.Tensor, data_samples: Optional[mmagic.utils.typing.SampleList] = None, key: str = 'img') → Tuple[torch.Tensor, mmagic.utils.typing.SampleList] [source]

Preprocess a batch of image tensor and update metainfo to corresponding data samples.

Parameters
  • inputs (Tensor) – Image tensor with shape (C, H, W), (N, C, H, W) or (N, t, C, H, W) to preprocess.

  • data_samples (List[DataSample], optional) – The data samples of corresponding inputs. If not passed, a list of empty data samples will be initialized to save metainfo. Defaults to None.

  • key (str) – The key of image tensor in data samples. Defaults to ‘img’.

Returns

The preprocessed image tensor and the updated data samples.

Return type

Tuple[Tensor, List[DataSample]]

_preprocess_image_list(tensor_list: List[torch.Tensor], data_samples: Optional[mmagic.utils.typing.SampleList], key: str = 'img') → Tuple[torch.Tensor, mmagic.utils.typing.SampleList] [source]

Preprocess a list of image tensor and update metainfo to corresponding data samples.

Parameters
  • tensor_list (List[Tensor]) – Image tensor list to be preprocessed.

  • data_samples (List[DataSample], optional) – The data samples of corresponding inputs. If not passed, a list of empty data samples will be initialized to save metainfo. Defaults to None.

  • key (str) – The key of tensor list in data samples. Defaults to ‘img’.

Returns

The preprocessed image tensor and the updated data samples.

Return type

Tuple[Tensor, List[DataSample]]

_preprocess_dict_inputs(batch_inputs: dict, data_samples: Optional[mmagic.utils.typing.SampleList] = None) → Tuple[dict, mmagic.utils.typing.SampleList] [source]

Preprocess dict type inputs.

Parameters
  • batch_inputs (dict) – Input dict.

  • data_samples (List[DataSample], optional) – The data samples of corresponding inputs. If not passed, a list of empty data samples will be initialized to save metainfo. Defaults to None.

Returns

The preprocessed dict and the updated data samples.

Return type

Tuple[dict, List[DataSample]]

_preprocess_data_sample(data_samples: mmagic.utils.typing.SampleList, training: bool) → mmagic.structures.DataSample [source]

Preprocess data samples. When training is True, fields belonging to self.data_keys are converted to self.output_channel_order and then normalized by self.mean and self.std. When training is False, fields belonging to self.data_keys are converted to 'BGR' if possible, without normalization. The metainfo related to normalization and channel order conversion is updated in the data samples as well.

Parameters
  • data_samples (List[DataSample]) – A list of data samples to preprocess.

  • training (bool) – Whether in training mode.

Returns

The list of processed data samples.

Return type

list

forward(data: dict, training: bool = False) → dict [source]

Perform normalization, padding and channel order conversion.

Parameters
  • data (dict) – Input data to process.

  • training (bool) – Whether in training mode. Default: False.

Returns

Data in the same format as the model input.

Return type

dict
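Putting the pieces together, forward converts the channel order first and then normalizes, since mean and std are defined in the converted order. A per-pixel sketch under the default mean = std = 127.5 (preprocess_pixel is a hypothetical illustration, not the class's tensor code):

```python
def preprocess_pixel(bgr_pixel, mean, std, target_order='RGB'):
    """Hypothetical per-pixel sketch of forward()'s math, not mmagic code."""
    # Step 1: channel order conversion (BGR input -> target order).
    pixel = list(bgr_pixel)[::-1] if target_order == 'RGB' else list(bgr_pixel)
    # Step 2: normalization with mean/std defined in the target order.
    return [(v - m) / s for v, m, s in zip(pixel, mean, std)]

# With the default mean = std = 127.5, values are mapped into [-1, 1].
out = preprocess_pixel([0, 127.5, 255], [127.5] * 3, [127.5] * 3)
print(out)  # [1.0, 0.0, -1.0]
```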

destruct(outputs: torch.Tensor, data_samples: Union[mmagic.utils.typing.SampleList, mmagic.structures.DataSample, None] = None, key: str = 'img') → Union[list, torch.Tensor] [source]

Destruct padding and normalization, and convert the channel order back to BGR if possible. If data_samples is a list, outputs is destructed as a batch of tensors. If data_samples is a DataSample, outputs is destructed as a single tensor.

Before feeding model outputs to the visualizer and evaluator, users should call this function on model outputs and inputs.

Use cases:

>>> # destruct model outputs.
>>> # model outputs share the same preprocess information with inputs
>>> # ('img') therefore use 'img' as key
>>> feats = self.forward_tensor(inputs, data_samples, **kwargs)
>>> feats = self.data_preprocessor.destruct(feats, data_samples, 'img')
>>> # destruct model inputs for visualization
>>> for idx, data_sample in enumerate(data_samples):
>>>     destructed_input = self.data_preprocessor.destruct(
>>>         inputs[idx], data_sample, key='img')
>>>     data_sample.set_data({'input': destructed_input})
Parameters
  • outputs (Tensor) – Tensor to destruct.

  • data_samples (Union[SampleList, DataSample], optional) – Data samples (or data sample) corresponding to outputs. Defaults to None

  • key (str) – The key of field in data sample. Defaults to ‘img’.

Returns

Destructed outputs.

Return type

Union[list, Tensor]

_destruct_norm_and_conversion(batch_tensor: torch.Tensor, data_samples: Union[mmagic.utils.typing.SampleList, mmagic.structures.DataSample, None], key: str) → torch.Tensor [source]

De-normalize and de-convert the channel order. Note that we de-normalize first and then de-convert, since the mean and std used in normalization are based on the channel order after conversion.

Parameters
  • batch_tensor (Tensor) – Tensor to destruct.

  • data_samples (Union[SampleList, DataSample], optional) – Data samples (or data sample) corresponding to outputs.

  • key (str) – The key of field in data sample.

Returns

Destructed tensor.

Return type

Tensor
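The reverse order can be sketched per pixel: de-normalize with the model-order mean/std, then flip the channel axis back to BGR (destruct_pixel is a hypothetical helper mirroring that logic, not mmagic code):

```python
def destruct_pixel(model_pixel, mean, std, model_order='RGB'):
    """Hypothetical per-pixel sketch of the destruct math, not mmagic code."""
    # Step 1: de-normalize (mean/std are defined in the model's order).
    pixel = [v * s + m for v, m, s in zip(model_pixel, mean, std)]
    # Step 2: de-convert the channel order back to BGR.
    return pixel[::-1] if model_order == 'RGB' else pixel

print(destruct_pixel([1.0, 0.0, -1.0], [127.5] * 3, [127.5] * 3))
# [0.0, 127.5, 255.0]
```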

_destruct_padding(batch_tensor: torch.Tensor, data_samples: Union[mmagic.utils.typing.SampleList, mmagic.structures.DataSample, None], same_padding: bool = True) → Union[list, torch.Tensor] [source]

Destruct padding of the input tensor.

Parameters
  • batch_tensor (Tensor) – Tensor to destruct.

  • data_samples (Union[SampleList, DataSample], optional) – Data samples (or data sample) corresponding to outputs.

  • same_padding (bool) – Whether to un-pad all samples with the padding info of the first sample and return a stacked un-padded tensor. Otherwise, each sample is un-padded with the padding info saved in its own data sample, and a list of un-padded tensors is returned, since the un-padded tensors may have different shapes. Defaults to True.

Returns

Destructed outputs.

Return type

Union[list, Tensor]

class mmagic.models.data_preprocessors.MattorPreprocessor(mean: MEAN_STD_TYPE = [123.675, 116.28, 103.53], std: MEAN_STD_TYPE = [58.395, 57.12, 57.375], output_channel_order: str = 'RGB', proc_trimap: str = 'rescale_to_zero_one', stack_data_sample=True)[source]

Bases: mmagic.models.data_preprocessors.data_preprocessor.DataPreprocessor

DataPreprocessor for matting models.

See base class DataPreprocessor for detailed information.

The workflow is as follows:

  • Collate and move data to the target device.

  • Convert inputs from BGR to RGB if the shape of the input is (3, H, W).

  • Normalize images with the defined std and mean.

  • Stack inputs into batch_inputs.

Parameters
  • mean (Sequence[float or int], float or int, optional) – The pixel mean of image channels. Note that the normalization operation is performed after channel order conversion. If not specified, images will not be normalized. Defaults to [123.675, 116.28, 103.53].

  • std (Sequence[float or int], float or int, optional) – The pixel standard deviation of image channels. Note that the normalization operation is performed after channel order conversion. If not specified, images will not be normalized. Defaults to [58.395, 57.12, 57.375].

  • proc_trimap (str) – Method used to process trimap tensors. Available options are rescale_to_zero_one and as-is. Default: ‘rescale_to_zero_one’.

  • stack_data_sample (bool) – Whether to stack a list of data samples into one data sample. Only supported when the input data samples are DataSamples. Defaults to True.
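The two proc_trimap options can be sketched in plain Python (proc_trimap_values is a hypothetical helper on a flat list of trimap values, not the class's tensor implementation):

```python
def proc_trimap_values(values, mode='rescale_to_zero_one'):
    """Sketch of the two documented options: rescale 0-255 trimap values
    to [0, 1], or pass them through unchanged.  Hypothetical helper,
    not mmagic code."""
    if mode == 'rescale_to_zero_one':
        return [v / 255.0 for v in values]
    if mode == 'as-is':
        return list(values)
    raise ValueError(f'unknown proc_trimap mode: {mode}')

print(proc_trimap_values([0, 255]))           # [0.0, 1.0]
print(proc_trimap_values([0, 255], 'as-is'))  # [0, 255]
```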

_proc_batch_trimap(batch_trimaps: torch.Tensor) [source]
_preprocess_data_sample(data_samples: mmagic.utils.typing.SampleList, training: bool) → list [source]

Preprocess data samples. When training is True, fields belonging to self.data_keys are converted to self.output_channel_order and then divided by 255. When training is False, fields belonging to self.data_keys are converted to 'BGR' if possible, without normalization. The metainfo related to normalization and channel order conversion is updated in the data samples as well.

Parameters
  • data_samples (List[DataSample]) – A list of data samples to preprocess.

  • training (bool) – Whether in training mode.

Returns

The list of processed data samples.

Return type

list

forward(data: Sequence[dict], training: bool = False) → Tuple[torch.Tensor, list] [source]

Pre-process input images, trimaps, ground-truth as configured.

Parameters
  • data (Sequence[dict]) – Data sampled from the dataloader.

  • training (bool) – Whether to enable training time augmentation. Default: False.

Returns

Batched inputs and list of data samples.

Return type

Tuple[torch.Tensor, list]
