mmagic.apis.mmagic_inferencer

Module Contents

Classes

MMagicInferencer

MMagicInferencer API for mmagic models inference.

class mmagic.apis.mmagic_inferencer.MMagicInferencer(model_name: str = None, model_setting: int = None, config_name: int = None, model_config: str = None, model_ckpt: str = None, device: torch.device = None, extra_parameters: Dict = None, seed: int = 2022, **kwargs)[source]

MMagicInferencer API for mmagic models inference.

Parameters
  • model_name (str) – Name of the editing model.

  • model_setting (str) – Setting of a specific model. Defaults to ‘a’.

  • model_config (str) – Path to the config file for the editing model. Defaults to None.

  • model_ckpt (str) – Path to the checkpoint file for the editing model. Defaults to None.

  • config_dir (str) – Path to the directory containing config files. Defaults to ‘configs/’.

  • device (torch.device) – Device to use for inference. Defaults to ‘cuda’.

Examples

>>> # inference of a conditional model, biggan for example
>>> editor = MMagicInferencer(model_name='biggan')
>>> editor.infer(label=1, result_out_dir='./biggan_res.jpg')
>>> # inference of a translation model, pix2pix for example
>>> editor = MMagicInferencer(model_name='pix2pix')
>>> editor.infer(img='./test.jpg', result_out_dir='./pix2pix_res.jpg')
>>> # see demo/mmediting_inference_tutorial.ipynb for more examples
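
The same infer interface also covers mask- and trimap-based tasks. The sketch below is illustrative only: the model name 'aot_gan' and the file paths are assumptions, not values taken from this page.

>>> # hypothetical inpainting example; model name and paths are placeholders
>>> editor = MMagicInferencer(model_name='aot_gan')
>>> editor.infer(img='./masked_input.jpg', mask='./mask.png', result_out_dir='./inpaint_res.jpg')
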
inference_supported_models = ['inst_colorization', 'biggan', 'sngan_proj', 'sagan', 'dcgan', 'deblurganv2', 'wgan-gp',...[source]
inference_supported_models_cfg[source]
inference_supported_models_cfg_inited = False[source]
_get_inferencer_kwargs(model_name: Optional[str], model_setting: Optional[int], config_name: Optional[int], model_config: Optional[str], model_ckpt: Optional[str], extra_parameters: Optional[Dict]) → Dict[source]

Get the kwargs for the inferencer.

print_extra_parameters()[source]

Print the unique parameters of each kind of inferencer.
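
A minimal usage sketch, assuming an editor constructed as in the examples above:

>>> editor = MMagicInferencer(model_name='biggan')
>>> editor.print_extra_parameters()  # prints the parameters specific to this inferencer
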

infer(img: mmagic.apis.inferencers.base_mmagic_inferencer.InputsType = None, video: mmagic.apis.inferencers.base_mmagic_inferencer.InputsType = None, label: mmagic.apis.inferencers.base_mmagic_inferencer.InputsType = None, trimap: mmagic.apis.inferencers.base_mmagic_inferencer.InputsType = None, mask: mmagic.apis.inferencers.base_mmagic_inferencer.InputsType = None, result_out_dir: str = '', **kwargs) → Union[Dict, List[Dict]][source]

Run the editing model on an image or video.

Parameters
  • img (str) – Image path.

  • video (str) – Video path.

  • label (int) – Label for conditional or unconditional models.

  • trimap (str) – Trimap path for matting models.

  • mask (str) – Mask path for inpainting models.

  • result_out_dir (str) – Output directory of result image or video. Defaults to ‘’.

Returns

Each dict contains the inference result of each image or video.

Return type

Dict or List[Dict]
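
Besides writing to result_out_dir, the call returns the inference result. A sketch of inspecting it (the keys of the returned dict depend on the concrete inferencer and are not specified here):

>>> editor = MMagicInferencer(model_name='biggan')
>>> result = editor.infer(label=1, result_out_dir='./biggan_res.jpg')
>>> type(result)  # Dict for a single input, List[Dict] for multiple inputs
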

get_model_config(model_name: str) → Dict[source]

Get the model configuration, including the path to the model config file and the checkpoint URL.

Parameters

model_name (str) – Name of the model.

Returns

Model configuration.

Return type

dict
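
A sketch of querying the configuration; the exact keys of the returned dict follow the library's model settings and are not listed here:

>>> editor = MMagicInferencer(model_name='biggan')
>>> cfg = editor.get_model_config('biggan')
>>> print(cfg)  # expected to reference the model config and checkpoint URL
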

static init_inference_supported_models_cfg() → None[source]
static get_inference_supported_models() → List[source]

Static method for getting the models supported for inference.

static get_inference_supported_tasks() → List[source]

Static method for getting the tasks supported for inference.

static get_task_supported_models(task: str) → List[source]

Static method for getting the models supported for a given task.
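
The static helpers can be called on the class itself; the sketch below hard-codes no task names and simply uses whatever the library reports:

>>> models = MMagicInferencer.get_inference_supported_models()
>>> tasks = MMagicInferencer.get_inference_supported_tasks()
>>> MMagicInferencer.get_task_supported_models(tasks[0])  # models for the first reported task
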
