
mmagic.datasets

Package Contents

Classes

BasicConditionalDataset

Custom dataset for conditional GAN.

BasicFramesDataset

BasicFramesDataset for open source projects in OpenMMLab/MMagic.

BasicImageDataset

BasicImageDataset for open source projects in OpenMMLab/MMagic.

CIFAR10

CIFAR10 Dataset.

AdobeComp1kDataset

Adobe composition-1k dataset.

ControlNetDataset

Demo dataset to test ControlNet.

DreamBoothDataset

Dataset for DreamBooth.

GrowScaleImgDataset

Grow Scale Unconditional Image Dataset.

ImageNet

ImageNet Dataset.

MSCoCoDataset

MSCoCo 2014 dataset.

PairedImageDataset

General paired image folder dataset for image generation.

SinGANDataset

SinGAN Dataset.

TextualInversionDataset

Dataset for Textual Inversion and ViCo.

UnpairedImageDataset

General unpaired image folder dataset for image generation.

class mmagic.datasets.BasicConditionalDataset(ann_file: str = '', metainfo: Optional[dict] = None, data_root: str = '', data_prefix: Union[str, dict] = '', extensions: Sequence[str] = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif'), lazy_init: bool = False, classes: Union[str, Sequence[str], None] = None, **kwargs)[source]

Bases: mmengine.dataset.BaseDataset

Custom dataset for conditional GAN. This class is based on the combination of BaseDataset (https://github.com/open-mmlab/mmclassification/blob/main/mmcls/datasets/base_dataset.py) and CustomDataset (https://github.com/open-mmlab/mmclassification/blob/main/mmcls/datasets/custom.py).

The dataset supports two kinds of annotation format, plus an annotation-free folder layout (the third case below).

  1. An annotation file read line by line (e.g., txt) is provided, and each line indicates a sample:

    The sample files:

    data_prefix/
    ├── folder_1
    │   ├── xxx.png
    │   ├── xxy.png
    │   └── ...
    └── folder_2
        ├── 123.png
        ├── nsdf3.png
        └── ...
    

    The annotation file (the first column is the image path and the second column is the category index):

    folder_1/xxx.png 0
    folder_1/xxy.png 1
    folder_2/123.png 5
    folder_2/nsdf3.png 3
    ...
    

    Please specify the category names by the argument classes or metainfo.

  2. A dict-based annotation file (e.g., json) is provided, key and value indicate the path and label of the sample:

    The sample files:

    data_prefix/
    ├── folder_1
    │   ├── xxx.png
    │   ├── xxy.png
    │   └── ...
    └── folder_2
        ├── 123.png
        ├── nsdf3.png
        └── ...
    

    The annotation file (the key is the image path and the value is the label):

    {
        "folder_1/xxx.png": [1, 2, 3, 4],
        "folder_1/xxy.png": [2, 4, 1, 0],
        "folder_2/123.png": [0, 9, 8, 1],
        "folder_2/nsdf3.png", [1, 0, 0, 2],
        ...
    }
    

    In this kind of annotation, labels can be of any type and are not restricted to an index.

  3. The samples are arranged in the following way:

    data_prefix/
    ├── class_x
    │   ├── xxx.png
    │   ├── xxy.png
    │   └── ...
    │       └── xxz.png
    └── class_y
        ├── 123.png
        ├── nsdf3.png
        ├── ...
        └── asd932_.png
    

If ann_file is specified, the dataset will be generated in one of the first two ways; otherwise, the third way is tried.

Parameters
  • ann_file (str) – Annotation file path. Defaults to ‘’.

  • metainfo (dict, optional) – Meta information for dataset, such as class information. Defaults to None.

  • data_root (str) – The root directory for data_prefix and ann_file. Defaults to ‘’.

  • data_prefix (str | dict) – Prefix for the data. Defaults to ‘’.

  • extensions (Sequence[str]) – A sequence of allowed extensions. Defaults to (‘.jpg’, ‘.jpeg’, ‘.png’, ‘.ppm’, ‘.bmp’, ‘.pgm’, ‘.tif’).

  • lazy_init (bool) – Whether to load annotations during instantiation. In some cases, such as visualization, only the meta information of the dataset is needed, and it is not necessary to load the annotation file. BaseDataset can skip loading annotations to save time by setting lazy_init=True. Defaults to False.

  • **kwargs – Other keyword arguments in BaseDataset.
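
A minimal instantiation sketch for the first annotation style; the paths, annotation file and class names below are placeholders, not files shipped with MMagic:

from mmagic.datasets import BasicConditionalDataset

# Each line of ann.txt: "<relative/path/to/img> <category index>", as in case 1 above.
dataset = BasicConditionalDataset(
    ann_file='ann.txt',
    data_root='data/toy',        # placeholder root directory
    data_prefix='train',         # images live under data/toy/train
    classes=['cat', 'dog'])      # names for category indices 0 and 1
print(len(dataset), dataset.CLASSES)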

property img_prefix

The prefix of images.

property CLASSES

Return all category names.

property class_to_idx

Mapping from class name to class index.

Returns

Mapping from class name to class index.

Return type

dict

_find_samples(file_backend)

Find samples from data_prefix.

load_data_list()

Load image paths and gt_labels.

is_valid_file(filename: str) → bool

Check if a file is a valid sample.

get_gt_labels()

Get all ground-truth labels (categories).

Returns

Categories for all images.

Return type

np.ndarray

get_cat_ids(idx: int) → List[int]

Get category id by index.

Parameters

idx (int) – Index of data.

Returns

Image category of the specified index.

Return type

cat_ids (List[int])

_compat_classes(metainfo, classes)

Merge the old style classes arguments to metainfo.

full_init()

Load annotation file and set BaseDataset._fully_initialized to True.

__repr__()

Print the basic information of the dataset.

Returns

Formatted string.

Return type

str

extra_repr() → List[str]

The extra repr information of the dataset.

class mmagic.datasets.BasicFramesDataset(ann_file: str = '', metainfo: Optional[dict] = None, data_root: Optional[str] = None, data_prefix: dict = dict(img=''), pipeline: List[Union[dict, Callable]] = [], test_mode: bool = False, filename_tmpl: dict = dict(), search_key: Optional[str] = None, backend_args: Optional[dict] = None, depth: int = 1, num_input_frames: Optional[int] = None, num_output_frames: Optional[int] = None, fixed_seq_len: Optional[int] = None, load_frames_list: dict = dict(), **kwargs)[source]

Bases: mmengine.dataset.BaseDataset

BasicFramesDataset for open source projects in OpenMMLab/MMagic.

This dataset is designed for low-level vision tasks with frames, such as video super-resolution and video frame interpolation.

The annotation file is optional.

If an annotation file is used, the annotation format is as follows.

Case 1 (Vid4):

    calendar 41
    city 34
    foliage 49
    walk 47

Case 2 (REDS):

    000/00000000.png (720, 1280, 3)
    000/00000001.png (720, 1280, 3)

Case 3 (Vimeo90k):

    00001/0266 (256, 448, 3)
    00001/0268 (256, 448, 3)
Parameters
  • ann_file (str) – Annotation file path. Defaults to ‘’.

  • metainfo (dict, optional) – Meta information for dataset, such as class information. Defaults to None.

  • data_root (str, optional) – The root directory for data_prefix and ann_file. Defaults to None.

  • data_prefix (dict, optional) – Prefix for training data. Defaults to dict(img=’’, gt=’’).

  • pipeline (list, optional) – Processing pipeline. Defaults to [].

  • test_mode (bool, optional) – test_mode=True means in test phase. Defaults to False.

  • filename_tmpl (dict) – Template for each filename. Note that the template excludes the file extension. Default: dict().

  • search_key (str) – The key used for searching the folder to get data_list. Default: ‘gt’.

  • backend_args (dict, optional) – Arguments to instantiate the backend corresponding to the URI prefix. Defaults to None.

  • depth (int) – The depth of path. Default: 1

  • num_input_frames (None | int) – Number of input frames. Default: None.

  • num_output_frames (None | int) – Number of output frames. Default: None.

  • fixed_seq_len (None | int) – The fixed sequence length. If None, BasicFramesDataset will obtain the length of each sequence. Default: None.

  • load_frames_list (dict) – Load frames list for each key. Default: dict().

Examples

Assume the file structure as the following:

mmagic (root)
├── mmagic
├── tools
├── configs
├── data
│   ├── Vid4
│   │   ├── BIx4
│   │   │   ├── city
│   │   │   │   ├── img1.png
│   │   ├── GT
│   │   │   ├── city
│   │   │   │   ├── img1.png
│   │   ├── meta_info_Vid4_GT.txt
│   ├── places
│   │   ├── sequences
│   │   │   ├── 00001
│   │   │   │   ├── 0389
│   │   │   │   │   ├── img1.png
│   │   │   │   │   ├── img2.png
│   │   │   │   │   ├── img3.png
│   │   ├── tri_trainlist.txt

Case 1: Loading Vid4 dataset for training a VSR model.

dataset = BasicFramesDataset(
    ann_file='meta_info_Vid4_GT.txt',
    metainfo=dict(dataset_type='vid4', task_name='vsr'),
    data_root='data/Vid4',
    data_prefix=dict(img='BIx4', gt='GT'),
    pipeline=[],
    depth=2,
    num_input_frames=5)

Case 2: Loading Vimeo90k dataset for training a VFI model.

dataset = BasicFramesDataset(
    ann_file='tri_trainlist.txt',
    metainfo=dict(dataset_type='vimeo90k', task_name='vfi'),
    data_root='data/vimeo-triplet',
    data_prefix=dict(img='sequences', gt='sequences'),
    pipeline=[],
    depth=2,
    load_frames_list=dict(
        img=['img1.png', 'img3.png'], gt=['img2.png']))
See more details in the unit test tests/test_datasets/test_base_frames_dataset.py, e.g. TestFramesDatasets().test_version_1_method().

METAINFO
load_data_list() → List[dict]

Load data list from folder or annotation file.

Returns

A list of annotations.

Return type

list[dict]

_get_path_list()

Get list of paths from annotation file or folder of dataset.

Returns

A list of paths.

Return type

list[str]

_get_path_list_from_ann()

Get list of paths from annotation file.

Returns

A list of paths.

Return type

list[str]

_get_path_list_from_folder(sub_folder=None, need_ext=True, depth=1)

Get list of paths from folder.

Parameters
  • sub_folder (None | str) – The path of sub_folder. Default: None.

  • need_ext (bool) – Whether need ext. Default: True.

  • depth (int) – Residual depth of path, recursively called to depth == 1. Default: 1

Returns

A list of paths.

Return type

list[str]

_set_seq_lens()

Get sequence lengths.

_get_frames_list(key, folder)

Obtain list of frames.

Parameters
  • key (str) – The key of frames list, e.g. img, gt.

  • folder (str) – Folder of frames.

Returns

The paths list of frames.

Return type

list[str]

class mmagic.datasets.BasicImageDataset(ann_file: str = '', metainfo: Optional[dict] = None, data_root: Optional[str] = None, data_prefix: dict = dict(img=''), pipeline: List[Union[dict, Callable]] = [], test_mode: bool = False, filename_tmpl: dict = dict(), search_key: Optional[str] = None, backend_args: Optional[dict] = None, img_suffix: Optional[Union[str, Tuple[str]]] = IMG_EXTENSIONS, recursive: bool = False, **kwards)[source]

Bases: mmengine.dataset.BaseDataset

BasicImageDataset for open source projects in OpenMMLab/MMagic.

This dataset is designed for low-level vision tasks with image, such as super-resolution and inpainting.

The annotation file is optional.

If an annotation file is used, the annotation format is as follows.

Case 1 (CelebA-HQ):

    000001.png
    000002.png

Case 2 (DIV2K):

    0001_s001.png (480,480,3)
    0001_s002.png (480,480,3)
    0001_s003.png (480,480,3)
    0002_s001.png (480,480,3)
    0002_s002.png (480,480,3)

Case 3 (Vimeo90k):

    00001/0266 (256, 448, 3)
    00001/0268 (256, 448, 3)
Parameters
  • ann_file (str) – Annotation file path. Defaults to ‘’.

  • metainfo (dict, optional) – Meta information for dataset, such as class information. Defaults to None.

  • data_root (str, optional) – The root directory for data_prefix and ann_file. Defaults to None.

  • data_prefix (dict, optional) – Prefix for training data. Defaults to dict(img=None, ann=None).

  • pipeline (list, optional) – Processing pipeline. Defaults to [].

  • test_mode (bool, optional) – test_mode=True means in test phase. Defaults to False.

  • filename_tmpl (dict) – Template for each filename. Note that the template excludes the file extension. Default: dict().

  • search_key (str) – The key used for searching the folder to get data_list. Default: ‘gt’.

  • backend_args (dict, optional) – Arguments to instantiate the backend corresponding to the URI prefix. Defaults to None.

  • img_suffix (str or tuple[str], optional) – File suffix that we are interested in. Default: IMG_EXTENSIONS.

  • recursive (bool) – If set to True, recursively scan the directory. Default: False.

Note

Assume the file structure as the following:

mmagic (root)
├── mmagic
├── tools
├── configs
├── data
│   ├── DIV2K
│   │   ├── DIV2K_train_HR
│   │   │   ├── image.png
│   │   ├── DIV2K_train_LR_bicubic
│   │   │   ├── X2
│   │   │   ├── X3
│   │   │   ├── X4
│   │   │   │   ├── image_x4.png
│   │   ├── DIV2K_valid_HR
│   │   ├── DIV2K_valid_LR_bicubic
│   │   │   ├── X2
│   │   │   ├── X3
│   │   │   ├── X4
│   ├── places
│   │   ├── test_set
│   │   ├── train_set
│   │   ├── meta
│   │   │   ├── Places365_train.txt
│   │   │   ├── Places365_val.txt

Examples

Case 1: Loading DIV2K dataset for training a SISR model.

dataset = BasicImageDataset(
    ann_file='',
    metainfo=dict(
        dataset_type='div2k',
        task_name='sisr'),
    data_root='data/DIV2K',
    data_prefix=dict(
        gt='DIV2K_train_HR', img='DIV2K_train_LR_bicubic/X4'),
    filename_tmpl=dict(img='{}_x4', gt='{}'),
    pipeline=[])

Case 2: Loading places dataset for training an inpainting model.

dataset = BasicImageDataset(
    ann_file='meta/Places365_train.txt',
    metainfo=dict(
        dataset_type='places365',
        task_name='inpainting'),
    data_root='data/places',
    data_prefix=dict(gt='train_set'),
    pipeline=[])
METAINFO
load_data_list() → List[dict]

Load data list from folder or annotation file.

Returns

A list of annotations.

Return type

list[dict]

_get_path_list()

Get list of paths from annotation file or folder of dataset.

Returns

A list of paths.

Return type

list[dict]

_get_path_list_from_ann()

Get list of paths from annotation file.

Returns

List of paths.

Return type

List

_get_path_list_from_folder()

Get list of paths from folder.

Returns

List of paths.

Return type

List

class mmagic.datasets.CIFAR10(data_prefix: str, test_mode: bool, metainfo: Optional[dict] = None, data_root: str = '', download: bool = True, **kwargs)[source]

Bases: mmagic.datasets.basic_conditional_dataset.BasicConditionalDataset

CIFAR10 Dataset.

This implementation is modified from https://github.com/pytorch/vision/blob/master/torchvision/datasets/cifar.py

Parameters
  • data_prefix (str) – Prefix for data.

  • test_mode (bool) – test_mode=True means in test phase. It determines whether to use the training set or the test set.

  • metainfo (dict, optional) – Meta information for dataset, such as categories information. Defaults to None.

  • data_root (str) – The root directory for data_prefix. Defaults to ‘’.

  • download (bool) – Whether to download the dataset if it does not exist. Defaults to True.

  • **kwargs – Other keyword arguments in BaseDataset.
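
A minimal sketch of building the training split; data/cifar10 is a placeholder directory, and the archive is downloaded there if it is missing:

from mmagic.datasets import CIFAR10

train_set = CIFAR10(
    data_prefix='data/cifar10',  # placeholder download/extract location
    test_mode=False,             # use the training batches instead of test_batch
    download=True)
print(len(train_set))            # 50000 training images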

base_folder = 'cifar-10-batches-py'
url = 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
filename = 'cifar-10-python.tar.gz'
tgz_md5 = 'c58f30108f718f92721af3b95e74349a'
train_list = [['data_batch_1', 'c99cafc152244af753f735de768cd75f'], ['data_batch_2',...
test_list = [['test_batch', '40351d587109b95175f43aff81a1287e']]
meta
METAINFO
load_data_list()

Load images and ground truth labels.

_load_meta()

Load categories information from metafile.

_check_integrity()

Check the integrity of data files.

extra_repr() → List[str]

The extra repr information of the dataset.

class mmagic.datasets.AdobeComp1kDataset(ann_file: Optional[str] = '', metainfo: Union[collections.abc.Mapping, mmengine.config.Config, None] = None, data_root: Optional[str] = '', data_prefix: dict = dict(img_path=''), filter_cfg: Optional[dict] = None, indices: Optional[Union[int, Sequence[int]]] = None, serialize_data: bool = True, pipeline: List[Union[dict, Callable]] = [], test_mode: bool = False, lazy_init: bool = False, max_refetch: int = 1000)[source]

Bases: mmengine.dataset.BaseDataset

Adobe composition-1k dataset.

The dataset loads (alpha, fg, bg) data and applies the specified transforms to them. In the pipeline, you can either composite the merged image online or load a pre-composited merged image.

Example for online comp-1k dataset:

[
    {
        "alpha": 'alpha/000.png',
        "fg": 'fg/000.png',
        "bg": 'bg/000.png'
    },
    {
        "alpha": 'alpha/001.png',
        "fg": 'fg/001.png',
        "bg": 'bg/001.png'
    },
]

Example for offline comp-1k dataset:

[
    {
        "alpha": 'alpha/000.png',
        "merged": 'merged/000.png',
        "fg": 'fg/000.png',
        "bg": 'bg/000.png'
    },
    {
        "alpha": 'alpha/001.png',
        "merged": 'merged/001.png',
        "fg": 'fg/001.png',
        "bg": 'bg/001.png'
    },
]
Parameters
  • ann_file (str) – Annotation file path. Defaults to ‘’.

  • data_root (str, optional) – The root directory for data_prefix and ann_file. Defaults to None.

  • pipeline (list, optional) – Processing pipeline. Defaults to [].

  • test_mode (bool, optional) – test_mode=True means in test phase. Defaults to False.

  • **kwargs – Other arguments passed to mmengine.dataset.BaseDataset.

Examples

See the unit tests. TODO: Move some code from the unit tests here.
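
For reference, a minimal sketch of building the online comp-1k variant; the annotation file name and data root are placeholders:

from mmagic.datasets import AdobeComp1kDataset

# training_list.json follows the online format above (alpha/fg/bg per sample).
dataset = AdobeComp1kDataset(
    ann_file='training_list.json',
    data_root='data/adobe_composition-1k',
    pipeline=[])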

METAINFO
load_data_list() → List[dict]

Load annotations from the annotation file named self.ann_file.

In order to be compatible with both the new and old annotation formats, we copy the implementation from mmengine and make some modifications.

Returns

A list of annotations.

Return type

list[dict]

parse_data_info(raw_data_info: dict) → Union[dict, List[dict]]

Join data_root to each path in data_info.

class mmagic.datasets.ControlNetDataset(ann_file: str = 'prompt.json', data_root: str = './data/fill50k', control_key='source', image_key='target', pipeline: List[Union[dict, Callable]] = [])[source]

Bases: mmengine.dataset.BaseDataset

Demo dataset to test ControlNet. Modified from https://github.com/lllyasviel/ControlNet/blob/16ea3b5379c1e78a4bc8e3fc9cae8d65c42511b1/tutorial_dataset.py.

You can download the demo data from https://huggingface.co/lllyasviel/ControlNet/blob/main/training/fill50k.zip and then unzip the file to the data folder.

Parameters
  • ann_file (str) – Path to the annotation file. Defaults to ‘prompt.json’ as ControlNet’s default.

  • data_root (str) – Path to the data root. Defaults to ‘./data/fill50k’.

  • pipeline (list[dict | callable]) – A sequence of data transforms.
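
A minimal sketch using the defaults described above, assuming fill50k has already been unzipped to ./data/fill50k:

from mmagic.datasets import ControlNetDataset

dataset = ControlNetDataset(
    ann_file='prompt.json',      # ControlNet's default annotation file
    data_root='./data/fill50k',
    pipeline=[])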

load_data_list() → List[dict]

Load annotations from the annotation file named self.ann_file.

Returns

A list of annotations.

Return type

list[dict]

class mmagic.datasets.DreamBoothDataset(data_root: str, concept_dir: str, prompt: str, pipeline: List[Union[dict, Callable]] = [])[source]

Bases: mmengine.dataset.BaseDataset

Dataset for DreamBooth.

Parameters
  • data_root (str) – Path to the data root.

  • concept_dir (str) – Path to the concept images.

  • prompt (str) – Prompt of the concept.

  • pipeline (list[dict | callable]) – A sequence of data transforms.
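
A minimal sketch; the data root, concept folder and prompt below are placeholders:

from mmagic.datasets import DreamBoothDataset

dataset = DreamBoothDataset(
    data_root='./data/dreambooth',   # placeholder root
    concept_dir='imgs',              # folder holding the concept images
    prompt='a photo of sks dog',     # prompt describing the concept
    pipeline=[])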

load_data_list() → list

Load data list from concept_dir and class_dir.

class mmagic.datasets.GrowScaleImgDataset(data_roots: dict, pipeline, len_per_stage=int(1000000.0), gpu_samples_per_scale=None, gpu_samples_base=32, io_backend: Optional[str] = None, file_lists: Optional[Union[str, dict]] = None, test_mode=False)[source]

Bases: mmengine.dataset.BaseDataset

Grow Scale Unconditional Image Dataset.

This dataset is similar to UnconditionalImageDataset, but offers more dynamic functionality to support complex algorithms, like PGGAN.

Highlight functionalities:

  1. Support a growing-scale dataset. The motivation is to decrease the data pre-processing load on the CPU. In this dataset, you can provide data_roots like:

    {'64': 'path_to_64x64_imgs',
     '512': 'path_to_512x512_imgs'}
    

    Then, at training scales lower than 64x64, this dataset will set self.imgs_root to ‘path_to_64x64_imgs’;

  2. Offer samples_per_gpu according to different scales. In this dataset, self.samples_per_gpu helps the runner know the updated batch size.

Basically, this dataset contains raw images for training unconditional GANs. Given a root directory, we recursively find all images under it. The transformation on data is defined by the pipeline.

Parameters
  • data_roots (dict) – Root paths of unconditional images for each scale.

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • len_per_stage (int, optional) – The length of the dataset for each scale. This argument changes the dataset length by concatenating or extracting a subset. If given a value less than 0, the original length will be kept. Defaults to 1e6.

  • gpu_samples_per_scale (dict | None, optional) – Dict containing samples_per_gpu for each scale. For example, {'32': 4} will set the scale of 32 with samples_per_gpu=4, while other scales use samples_per_gpu=self.gpu_samples_base.

  • gpu_samples_base (int, optional) – Set default samples_per_gpu for each scale. Defaults to 32.

  • io_backend (str, optional) – The storage backend type. Options are “disk”, “ceph”, “memcached”, “lmdb”, “http” and “petrel”. Default: None.

  • test_mode (bool, optional) – If True, the dataset will work in test mode. Otherwise, in train mode. Default to False.
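
A minimal sketch wiring up the per-scale roots and batch sizes described above; the paths and scales are placeholders:

from mmagic.datasets import GrowScaleImgDataset

dataset = GrowScaleImgDataset(
    data_roots={'64': 'data/pggan/64', '512': 'data/pggan/512'},
    pipeline=[],
    len_per_stage=1000000,                     # dataset length at each scale
    gpu_samples_per_scale={'32': 8, '64': 4},  # larger batches at small scales
    gpu_samples_base=4)                        # batch size for all other scales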

_VALID_IMG_SUFFIX = ('.jpg', '.png', '.jpeg', '.JPEG')
load_data_list()

Load annotations.

update_annotations(curr_scale)

Update annotations.

Parameters

curr_scale (int) – Current image scale.

Returns

Whether to update.

Return type

bool

concat_imgs_list_to(num)

Concat image list to specified length.

Parameters

num (int) – The length of the concatenated image list.

prepare_train_data(idx)

Prepare training data.

Parameters

idx (int) – Index of current batch.

Returns

Prepared training data batch.

Return type

dict

prepare_test_data(idx)

Prepare testing data.

Parameters

idx (int) – Index of current batch.

Returns

Prepared testing data batch.

Return type

dict

__getitem__(idx)

Get the idx-th image and data information of dataset after self.pipeline, and full_init will be called if the dataset has not been fully initialized.

During the training phase, if self.pipeline returns None, self._rand_another will be called until a valid image is fetched or the maximum number of refetches is reached.

Parameters

idx (int) – The index of self.data_list.

Returns

The idx-th image and data information of dataset after self.pipeline.

Return type

dict

__repr__()

Print self.transforms in sequence.

Returns

Formatted string.

Return type

str

class mmagic.datasets.ImageNet(ann_file: str = '', metainfo: Optional[dict] = None, data_root: str = '', data_prefix: Union[str, dict] = '', **kwargs)[source]

Bases: mmagic.datasets.basic_conditional_dataset.BasicConditionalDataset

ImageNet Dataset.

The dataset supports two kinds of annotation format. More details can be found in CustomDataset.

Parameters
  • ann_file (str) – Annotation file path. Defaults to ‘’.

  • metainfo (dict, optional) – Meta information for dataset, such as class information. Defaults to None.

  • data_root (str) – The root directory for data_prefix and ann_file. Defaults to ‘’.

  • data_prefix (str | dict) – Prefix for training data. Defaults to ‘’.

  • **kwargs – Other keyword arguments in CustomDataset and BaseDataset.
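
A minimal sketch; the meta/train.txt annotation file and train/ prefix follow a common ImageNet layout and are assumptions, not paths fixed by MMagic:

from mmagic.datasets import ImageNet

dataset = ImageNet(
    ann_file='meta/train.txt',   # "<relative/path> <label index>" per line
    data_root='data/imagenet',
    data_prefix='train')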

IMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif')
METAINFO
class mmagic.datasets.MSCoCoDataset(ann_file: str = '', metainfo: Optional[dict] = None, data_root: str = '', drop_caption_rate=0.0, phase='train', year=2014, data_prefix: Union[str, dict] = '', extensions: Sequence[str] = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif'), lazy_init: bool = False, classes: Union[str, Sequence[str], None] = None, caption_style: str = '', **kwargs)[source]

Bases: mmagic.datasets.basic_conditional_dataset.BasicConditionalDataset

MSCoCo 2014 dataset.

Parameters
  • ann_file (str) – Annotation file path. Defaults to ‘’.

  • metainfo (dict, optional) – Meta information for dataset, such as class information. Defaults to None.

  • data_root (str) – The root directory for data_prefix and ann_file. Defaults to ‘’.

  • drop_caption_rate (float, optional) – Rate of dropping caption, used for training. Defaults to 0.0.

  • phase (str, optional) – Subdataset used for a certain phase; can be set to train, test or val. Defaults to ‘train’.

  • year (int, optional) – Version of the CoCo dataset; can be set to 2014 or 2017. Defaults to 2014.

  • data_prefix (str | dict) – Prefix for the data. Defaults to ‘’.

  • extensions (Sequence[str]) – A sequence of allowed extensions. Defaults to (‘.jpg’, ‘.jpeg’, ‘.png’, ‘.ppm’, ‘.bmp’, ‘.pgm’, ‘.tif’).

  • lazy_init (bool) – Whether to load annotations during instantiation. In some cases, such as visualization, only the meta information of the dataset is needed, and it is not necessary to load the annotation file. BaseDataset can skip loading annotations to save time by setting lazy_init=True. Defaults to False.

  • caption_style (str) – If you want to add a style description for each caption, you can set caption_style to your style prompt. For example, ‘realistic style’. Defaults to empty str.

  • **kwargs – Other keyword arguments in BaseDataset.
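
A minimal sketch for the 2014 training split; data/coco is a placeholder root, and the optional arguments are shown only to illustrate the parameters above:

from mmagic.datasets import MSCoCoDataset

dataset = MSCoCoDataset(
    data_root='data/coco',
    phase='train',
    year=2014,
    drop_caption_rate=0.1,            # randomly drop 10% of captions during training
    caption_style='realistic style')  # style description added to each caption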

METAINFO
load_data_list()

Load image paths and gt_labels.

class mmagic.datasets.PairedImageDataset(data_root, pipeline, io_backend: Optional[str] = None, test_mode=False, test_dir='test')[source]

Bases: mmengine.dataset.BaseDataset

General paired image folder dataset for image generation.

It assumes that the training directory is ‘/path/to/data/train’. During test time, the directory is ‘/path/to/data/test’. ‘/path/to/data’ can be initialized by the arg dataroot. Each sample contains a pair of images concatenated along the width dimension (A|B).

Parameters
  • dataroot (str | Path) – Path to the folder root of paired images.

  • pipeline (List[dict | callable]) – A sequence of data transformations.

  • test_mode (bool) – Store True when building test dataset. Default: False.

  • test_dir (str) – Subfolder of dataroot which contain test images. Default: ‘test’.
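
A minimal sketch, assuming data/facades/train holds the concatenated (A|B) images; the root path is a placeholder:

from mmagic.datasets import PairedImageDataset

dataset = PairedImageDataset(
    data_root='data/facades',  # contains train/ (and test/ when test_mode=True)
    pipeline=[],
    test_mode=False)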

load_data_list()

Load paired image paths.

Returns

List that contains paired image paths.

Return type

list[dict]

scan_folder(path)

Obtain image path list (including sub-folders) from a given folder.

Parameters

path (str | Path) – Folder path.

Returns

Image list obtained from the given folder.

Return type

list[str]

class mmagic.datasets.SinGANDataset(data_root, min_size, max_size, scale_factor_init, pipeline, num_samples=-1)[source]

Bases: mmengine.dataset.BaseDataset

SinGAN Dataset.

In this dataset, we create an image pyramid and save it in the cache.

Parameters
  • img_path (str) – Path to the single image file.

  • min_size (int) – Min size of the image pyramid. Here, the number will be set to the min(H, W).

  • max_size (int) – Max size of the image pyramid. Here, the number will be set to the max(H, W).

  • scale_factor_init (float) – Rescale factor. Note that the actual factor we use may be a little bit different from this value.

  • num_samples (int, optional) – The number of samples (length) in this dataset. Defaults to -1.
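
A minimal sketch; the single training image path and pyramid settings below are placeholders:

from mmagic.datasets import SinGANDataset

dataset = SinGANDataset(
    data_root='data/singan/balloons.png',  # the single training image
    min_size=25,                           # smallest scale of the pyramid
    max_size=250,                          # largest scale of the pyramid
    scale_factor_init=0.75,
    pipeline=[],
    num_samples=-1)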

full_init()

Skip the full init process for SinGANDataset.

load_data_list(min_size, max_size, scale_factor_init)

Load annotations for SinGAN Dataset.

Parameters
  • min_size (int) – The minimum size for the image pyramid.

  • max_size (int) – The maximum size for the image pyramid.

  • scale_factor_init (float) – The initial scale factor.

__getitem__(index)

Get self.data_dict. For SinGAN, we use a single image at different resolutions to train the model.

Parameters

idx (int) – This will be ignored in SinGANDataset.

Returns

Dict containing the input image at different resolutions, processed by self.pipeline.

Return type

dict

__len__()

Get the length of the filtered dataset and automatically call full_init if the dataset has not been fully initialized.

Returns

The length of the filtered dataset.

Return type

int

class mmagic.datasets.TextualInversionDataset(data_root: str, concept_dir: str, placeholder: str, template: str, with_image_reference: bool = False, pipeline: List[Union[dict, Callable]] = [])[source]

Bases: mmengine.dataset.BaseDataset

Dataset for Textual Inversion and ViCo.

Parameters
  • data_root (str) – Path to the data root.

  • concept_dir (str) – Path to the concept images.

  • placeholder (str) – A string to denote the concept.

  • template (list[str]) – A list of strings like ‘A photo of {}’.

  • with_image_reference (bool) – Whether the dataset is used for ViCo training. Defaults to False.

  • pipeline (list[dict | callable]) – A sequence of data transforms.
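
A minimal sketch; the concept folder, placeholder token and templates are illustrative, and template is passed as a list of prompt strings per the parameter description above:

from mmagic.datasets import TextualInversionDataset

dataset = TextualInversionDataset(
    data_root='./data/textual_inversion',
    concept_dir='cat_toy',            # folder holding the concept images
    placeholder='S*',                 # token that denotes the concept
    template=['a photo of a {}', 'a rendering of a {}'],
    with_image_reference=False,       # set True for ViCo training
    pipeline=[])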

load_data_list() → list

Load data list from concept_dir and class_dir.

prepare_data(idx)

Get data processed by self.pipeline.

Parameters

idx (int) – The index of data_info.

Returns

Depends on self.pipeline.

Return type

Any

class mmagic.datasets.UnpairedImageDataset(data_root, pipeline, io_backend: Optional[str] = None, test_mode=False, domain_a='A', domain_b='B')[source]

Bases: mmengine.dataset.BaseDataset

General unpaired image folder dataset for image generation.

It assumes that the training directory of images from domain A is ‘/path/to/data/trainA’, and that from domain B is ‘/path/to/data/trainB’, respectively. ‘/path/to/data’ can be initialized by args ‘dataroot’. During test time, the directory is ‘/path/to/data/testA’ and ‘/path/to/data/testB’, respectively.

Parameters
  • dataroot (str | Path) – Path to the folder root of unpaired images.

  • pipeline (List[dict | callable]) – A sequence of data transformations.

  • io_backend (str, optional) – The storage backend type. Options are “disk”, “ceph”, “memcached”, “lmdb”, “http” and “petrel”. Default: None.

  • test_mode (bool) – Store True when building test dataset. Default: False.

  • domain_a (str, optional) – Domain of images in trainA / testA. Defaults to ‘A’.

  • domain_b (str, optional) – Domain of images in trainB / testB. Defaults to ‘B’.
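
A minimal sketch, assuming trainA/trainB (and testA/testB) folders under a placeholder root:

from mmagic.datasets import UnpairedImageDataset

dataset = UnpairedImageDataset(
    data_root='data/horse2zebra',  # contains trainA, trainB, testA, testB
    pipeline=[],
    test_mode=False,
    domain_a='A',
    domain_b='B')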

load_data_list()

Load the data list.

Returns

The data info list of source and target domain.

Return type

list

_load_domain_data_list(dataroot)

Load unpaired image paths of one domain.

Parameters

dataroot (str) – Path to the folder root for unpaired images of one domain.

Returns

List that contains unpaired image paths of one domain.

Return type

list[dict]

get_data_info(idx) → dict

Get annotation by index and automatically call full_init if the dataset has not been fully initialized.

Parameters

idx (int) – The index of data.

Returns

The idx-th annotation of the dataset.

Return type

dict

__len__()

The length of the dataset.

scan_folder(path)

Obtain image path list (including sub-folders) from a given folder.

Parameters

path (str | Path) – Folder path.

Returns

Image list obtained from the given folder.

Return type

list[str]
