mmagic.apis
Package Contents
Classes

MMagicInferencer – MMagicInferencer API for mmagic models inference.

Functions

init_model – Initialize a model from config file.
- mmagic.apis.init_model(config, checkpoint=None, device='cuda:0')
Initialize a model from config file.
- Parameters
config (str or mmengine.Config) – Config file path or the config object.
checkpoint (str, optional) – Checkpoint path. If left as None, the model will not load any weights.
device (str) – Device on which to deploy the model. Default: 'cuda:0'.
- Returns
The constructed model.
- Return type
nn.Module
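A minimal usage sketch of init_model. The config path below is an illustrative placeholder, not a file guaranteed to exist; substitute a real MMagic config from your checkout:

```python
from mmagic.apis import init_model

# Placeholder path: point this at an actual config in your MMagic repo.
config = 'configs/some_model/some_config.py'

# checkpoint=None builds the architecture without loading any weights,
# as described in the parameter documentation above.
model = init_model(config, checkpoint=None, device='cpu')
print(type(model))  # the constructed model, an nn.Module
```

Passing device='cpu' is handy for smoke-testing a config on machines without a GPU.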
- class mmagic.apis.MMagicInferencer(model_name: str = None, model_setting: int = None, config_name: int = None, model_config: str = None, model_ckpt: str = None, device: torch.device = None, extra_parameters: Dict = None, seed: int = 2022, **kwargs)
MMagicInferencer API for mmagic models inference.
- Parameters
model_name (str) – Name of the editing model.
model_setting (str) – Setting of a specific model. Defaults to 'a'.
model_config (str) – Path to the config file for the editing model. Defaults to None.
model_ckpt (str) – Path to the checkpoint file for the editing model. Defaults to None.
config_dir (str) – Path to the directory containing config files. Defaults to 'configs/'.
device (torch.device) – Device to use for inference. Defaults to 'cuda'.
Examples
>>> # inference of a conditional model, biggan for example
>>> editor = MMagicInferencer(model_name='biggan')
>>> editor.infer(label=1, result_out_dir='./biggan_res.jpg')
>>> # inference of a translation model, pix2pix for example
>>> editor = MMagicInferencer(model_name='pix2pix')
>>> editor.infer(img='./test.jpg', result_out_dir='./pix2pix_res.jpg')
>>> # see demo/mmediting_inference_tutorial.ipynb for more examples
- inference_supported_models = ['inst_colorization', 'biggan', 'sngan_proj', 'sagan', 'dcgan', 'deblurganv2', 'wgan-gp',...
- inference_supported_models_cfg
- inference_supported_models_cfg_inited = False
- _get_inferencer_kwargs(model_name: Optional[str], model_setting: Optional[int], config_name: Optional[int], model_config: Optional[str], model_ckpt: Optional[str], extra_parameters: Optional[Dict]) → Dict
Get the kwargs for the inferencer.
- infer(img: mmagic.apis.inferencers.base_mmagic_inferencer.InputsType = None, video: mmagic.apis.inferencers.base_mmagic_inferencer.InputsType = None, label: mmagic.apis.inferencers.base_mmagic_inferencer.InputsType = None, trimap: mmagic.apis.inferencers.base_mmagic_inferencer.InputsType = None, mask: mmagic.apis.inferencers.base_mmagic_inferencer.InputsType = None, result_out_dir: str = '', **kwargs) → Union[Dict, List[Dict]]
Infer the editing model on an image (or video).
- Parameters
img (str) – Image path.
video (str) – Video path.
label (int) – Label for conditional or unconditional models.
trimap (str) – Trimap path for matting models.
mask (str) – Mask path for inpainting models.
result_out_dir (str) – Output directory of the result image or video. Defaults to ''.
- Returns
A dict, or a list of dicts, each containing the inference result for one image or video.
- Return type
Dict or List[Dict]
- get_model_config(model_name: str) → Dict
Get the model configuration, including the model config and checkpoint URL.
- Parameters
model_name (str) – Name of the model.
- Returns
Model configuration.
- Return type
dict
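As a sketch of how this method fits together with the class examples above (it assumes an inferencer instance has already been constructed; 'biggan' appears in inference_supported_models):

```python
# Assumes `editor` was built earlier, e.g.:
#   editor = MMagicInferencer(model_name='biggan')
cfg = editor.get_model_config('biggan')

# Per the docstring, the returned dict bundles the model config
# and the checkpoint URL for the named model.
print(cfg.keys())
```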
- static get_inference_supported_models() → List
Static method that returns the list of models supported for inference.