
mmagic.models.editors.deepfillv2

Package Contents

Classes

DeepFillEncoderDecoder

Two-stage encoder-decoder structure used in the DeepFill model.

class mmagic.models.editors.deepfillv2.DeepFillEncoderDecoder(stage1=dict(type='GLEncoderDecoder', encoder=dict(type='DeepFillEncoder'), decoder=dict(type='DeepFillDecoder', in_channels=128), dilation_neck=dict(type='GLDilationNeck', in_channels=128, act_cfg=dict(type='ELU'))), stage2=dict(type='DeepFillRefiner'), return_offset=False)[source]

Bases: mmengine.model.BaseModule

Two-stage encoder-decoder structure used in the DeepFill model.

The details are in: Generative Image Inpainting with Contextual Attention

Parameters
  • stage1 (dict) – Config dict for building the stage1 model. As the DeepFill model uses the Global&Local model as its baseline in the first stage, the stage1 model can easily be built with GLEncoderDecoder.

  • stage2 (dict) – Config dict for building stage2 model.

  • return_offset (bool) – Whether to return the offset feature from the contextual attention module. Default: False.
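
A minimal construction sketch is shown below; it assumes mmagic and its DeepFill components are importable, and the config dicts simply restate the defaults from the signature above, so they could be omitted or adjusted as needed.

# Construction sketch, assuming mmagic is installed. The config values
# below mirror the defaults shown in the class signature.
from mmagic.models.editors.deepfillv2 import DeepFillEncoderDecoder

model = DeepFillEncoderDecoder(
    stage1=dict(
        type='GLEncoderDecoder',
        encoder=dict(type='DeepFillEncoder'),
        decoder=dict(type='DeepFillDecoder', in_channels=128),
        dilation_neck=dict(
            type='GLDilationNeck',
            in_channels=128,
            act_cfg=dict(type='ELU'))),
    stage2=dict(type='DeepFillRefiner'),
    return_offset=False)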

forward(x)[source]

Forward function.

Parameters

x (torch.Tensor) – Input tensor with shape (n, 5, h, w). In the channel dimension, [masked_img, ones, mask] are concatenated, as in DeepFillv1 models.

Returns

The first two items are the results from the first and second stages. If return_offset is set to True, the offset is returned as the third item.

Return type

tuple[torch.Tensor]
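
A minimal usage sketch, assuming a built DeepFillEncoderDecoder instance named model with return_offset=False; the tensor names and the 256x256 resolution are illustrative only.

# Usage sketch: build the 5-channel input [masked_img, ones, mask]
# described above and run both stages.
import torch

img = torch.rand(1, 3, 256, 256)            # example image, values in [0, 1]
mask = torch.zeros(1, 1, 256, 256)
mask[:, :, 64:192, 64:192] = 1.             # 1 marks the hole region
masked_img = img * (1. - mask)              # zero out the hole
ones = torch.ones_like(mask)

x = torch.cat([masked_img, ones, mask], dim=1)   # shape (1, 5, 256, 256)
stage1_res, stage2_res = model(x)                # two results when return_offset=False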

init_weights()[source]

Initialize weights for the model.
