
mmagic.models.editors.disco_diffusion.clip_wrapper

Module Contents

Classes

ClipWrapper

Clip Models wrapper.

EmbeddingLayerWithFixes

The revised embedding layer to support external embeddings.

class mmagic.models.editors.disco_diffusion.clip_wrapper.ClipWrapper(clip_type, *args, **kwargs)[source]

Bases: torch.nn.Module

Clip Models wrapper.

We provide wrappers for the clip models of openai and mlfoundations, where the user can specify clip_type as clip or open_clip, and then initialize a clip model using the same arguments as in the original codebase. The following clip model settings are provided in the official repo of disco diffusion:

| Setting | Source | Arguments |
|:---:|:---:|:---|
| ViTB32 | clip | name='ViT-B/32', jit=False |
| ViTB16 | clip | name='ViT-B/16', jit=False |
| ViTL14 | clip | name='ViT-L/14', jit=False |
| ViTL14_336px | clip | name='ViT-L/14@336px', jit=False |
| RN50 | clip | name='RN50', jit=False |
| RN50x4 | clip | name='RN50x4', jit=False |
| RN50x16 | clip | name='RN50x16', jit=False |
| RN50x64 | clip | name='RN50x64', jit=False |
| RN101 | clip | name='RN101', jit=False |
| ViTB32_laion2b_e16 | open_clip | model_name='ViT-B-32', pretrained='laion2b_e16' |
| ViTB32_laion400m_e31 | open_clip | model_name='ViT-B-32', pretrained='laion400m_e31' |
| ViTB32_laion400m_e32 | open_clip | model_name='ViT-B-32', pretrained='laion400m_e32' |
| ViTB32quickgelu_laion400m_e31 | open_clip | model_name='ViT-B-32-quickgelu', pretrained='laion400m_e31' |
| ViTB32quickgelu_laion400m_e32 | open_clip | model_name='ViT-B-32-quickgelu', pretrained='laion400m_e32' |
| ViTB16_laion400m_e31 | open_clip | model_name='ViT-B-16', pretrained='laion400m_e31' |
| ViTB16_laion400m_e32 | open_clip | model_name='ViT-B-16', pretrained='laion400m_e32' |
| RN50_yfcc15m | open_clip | model_name='RN50', pretrained='yfcc15m' |
| RN50_cc12m | open_clip | model_name='RN50', pretrained='cc12m' |
| RN50_quickgelu_yfcc15m | open_clip | model_name='RN50-quickgelu', pretrained='yfcc15m' |
| RN50_quickgelu_cc12m | open_clip | model_name='RN50-quickgelu', pretrained='cc12m' |
| RN101_yfcc15m | open_clip | model_name='RN101', pretrained='yfcc15m' |
| RN101_quickgelu_yfcc15m | open_clip | model_name='RN101-quickgelu', pretrained='yfcc15m' |

An example of a clip model config is as follows:

Examples:

>>> # Use OpenAI's CLIP
>>> config = dict(
>>>     type='ClipWrapper',
>>>     clip_type='clip',
>>>     name='ViT-B/32',
>>>     jit=False)
>>> # Use OpenCLIP
>>> config = dict(
>>>     type='ClipWrapper',
>>>     clip_type='open_clip',
>>>     model_name='RN50',
>>>     pretrained='yfcc15m')
>>> # Use CLIP from Hugging Face Transformers
>>> config = dict(
>>>     type='ClipWrapper',
>>>     clip_type='huggingface',
>>>     pretrained_model_name_or_path='runwayml/stable-diffusion-v1-5',
>>>     subfolder='text_encoder')
Parameters
  • clip_type (str) – The original source of the clip model. Should be one of 'clip', 'open_clip' or 'huggingface'.

  • *args – Positional arguments used to initialize the corresponding clip model.

  • **kwargs – Keyword arguments used to initialize the corresponding clip model.
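
A hedged usage sketch: in OpenMMLab-style code such a config dict is usually consumed through the MODELS registry rather than by instantiating the class directly; the registry pops type and forwards the remaining keys to this constructor. The registry import path below is an assumption for illustration, not part of this class.

>>> # Minimal sketch, assuming mmagic's mmengine-style MODELS registry is available.
>>> from mmagic.registry import MODELS
>>> config = dict(
>>>     type='ClipWrapper',
>>>     clip_type='clip',
>>>     name='ViT-B/32',
>>>     jit=False)
>>> clip_model = MODELS.build(config)
>>> # roughly equivalent to: ClipWrapper(clip_type='clip', name='ViT-B/32', jit=False)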

get_embedding_layer()[source]

Get the embedding layer of the clip model.

Currently only supported for CLIPTextModel.

add_embedding(embeddings: Union[dict, List[dict]])[source]
set_only_embedding_trainable()[source]
set_embedding_layer()[source]
unset_embedding_layer()[source]
forward(*args, **kwargs)[source]

Forward function.
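
Taken together, these helpers support injecting learned text embeddings into a Hugging Face text encoder (e.g. for textual-inversion-style training). The sketch below is purely illustrative: the embedding values, the token ids, and the call order are assumptions, not documented behaviour.

>>> # Illustrative sketch only; values, token ids and call order are assumptions.
>>> import torch
>>> wrapper = ClipWrapper(
>>>     clip_type='huggingface',
>>>     pretrained_model_name_or_path='runwayml/stable-diffusion-v1-5',
>>>     subfolder='text_encoder')
>>> wrapper.set_embedding_layer()           # assumed to wrap the token embedding with EmbeddingLayerWithFixes
>>> wrapper.add_embedding(dict(
>>>     name='new-concept',
>>>     embedding=torch.zeros(1, 768),      # hidden size assumed to match this text encoder
>>>     start=49408, end=49409))            # placeholder ids assumed to follow the original vocab
>>> wrapper.set_only_embedding_trainable()  # keep only the embedding layer trainable
>>> embedding_layer = wrapper.get_embedding_layer()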

class mmagic.models.editors.disco_diffusion.clip_wrapper.EmbeddingLayerWithFixes(wrapped: torch.nn.Embedding, external_embeddings: Optional[Union[dict, List[dict]]] = None)[source]

Bases: torch.nn.Module

The revised embedding layer to support external embeddings. The design of this class is inspired by https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/22bcc7be428c94e9408f589966c2040187245d81/modules/sd_hijack.py#L224.

Parameters
  • wrapped (nn.Embedding) – The embedding layer to be wrapped.

  • external_embeddings (Union[dict, List[dict]], optional) – The external embeddings added to this layer. Defaults to None.

property weight[source]

Get the weight of wrapped embedding layer.
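
A minimal sketch of the expected behaviour, assuming the property simply forwards to the wrapped layer:

>>> from torch import nn
>>> wrapped = nn.Embedding(10, 15)
>>> layer = EmbeddingLayerWithFixes(wrapped)
>>> layer.weight.shape  # same weight as the wrapped embedding: torch.Size([10, 15])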

check_duplicate_names(embeddings: List[dict])[source]

Check whether duplicate names exist in the list of external embeddings.

Parameters

embeddings (List[dict]) – A list of embeddings to be checked.

check_ids_overlap(embeddings)[source]

Check whether the token ids of the external embeddings overlap.

Parameters

embeddings (List[dict]) – A list of embeddings to be checked.

add_embeddings(embeddings: Optional[Union[dict, List[dict]]])[source]

Add external embeddings to this layer.

Use case:

>>> # 1. Add a placeholder token to the tokenizer and get its token ids.
>>> tokenizer = TokenizerWrapper('openai/clip-vit-base-patch32')
>>> # 'ngapi' means 'how much' in Kiswahili
>>> tokenizer.add_placeholder_tokens('ngapi', num_vec_per_token=4)
>>>
>>> # 2. Add external embeddings to the model.
>>> new_embedding = {
>>>     'name': 'ngapi',  # 'how much' in Kiswahili
>>>     'embedding': torch.ones(4, 15) * 2.3,
>>>     'start': tokenizer.get_token_info('ngapi')['start'],
>>>     'end': tokenizer.get_token_info('ngapi')['end'],
>>>     'trainable': False  # if True, it will be registered as a parameter
>>> }
>>> embedding_layer = nn.Embedding(10, 15)
>>> embedding_layer_wrapper = EmbeddingLayerWithFixes(embedding_layer)
>>> embedding_layer_wrapper.add_embeddings(new_embedding)
>>>
>>> # 3. Run the tokenizer and the wrapped embedding layer.
>>> input_text = ['hello, ngapi!', 'hello my friend, ngapi?']
>>> input_ids = tokenizer(
>>>     input_text, padding='max_length', truncation=True,
>>>     return_tensors='pt')['input_ids']
>>> out_feat = embedding_layer_wrapper(input_ids)
>>>
>>> # 4. Validate the result: the placeholder positions carry the external embedding.
>>> assert (out_feat[0, 3: 7] == 2.3).all()
>>> assert (out_feat[1, 5: 9] == 2.3).all()
Parameters

embeddings (Union[dict, list[dict]]) – The external embeddings to be added. Each dict must contain the following 4 fields: ‘name’ (the name of this embedding), ‘embedding’ (the embedding tensor), ‘start’ (the start token id of this embedding), ‘end’ (the end token id of this embedding). For example: {name: NAME, start: START, end: END, embedding: torch.Tensor}

replace_input_ids(input_ids: torch.Tensor) torch.Tensor[source]

Replace the token ids that belong to external embeddings with 0.

Parameters

input_ids (torch.Tensor) – The input ids to be replaced.

Returns

The replaced input ids.

Return type

torch.Tensor
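
A minimal sketch of the expected behaviour, assuming an external embedding occupying token ids 10 and 11 has already been registered; the replacement value (0) is the one stated above:

>>> import torch
>>> from torch import nn
>>> layer = EmbeddingLayerWithFixes(nn.Embedding(10, 15))
>>> layer.add_embeddings(dict(
>>>     name='new', embedding=torch.ones(2, 15), start=10, end=12))
>>> layer.replace_input_ids(torch.tensor([1, 10, 11, 3]))
>>> # the two external ids are mapped to 0 so the wrapped nn.Embedding
>>> # can be indexed safely: tensor([1, 0, 0, 3])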

replace_embeddings(input_ids: torch.Tensor, embedding: torch.Tensor, external_embedding: dict) torch.Tensor[source]

Replace the positions of an external embedding in the embedding tensor. Note that torch.cat is used here to avoid in-place modification.

Parameters
  • input_ids (torch.Tensor) – The original token ids. Shape like [LENGTH, ].

  • embedding (torch.Tensor) – The embedding of the token ids after the replace_input_ids function.

  • external_embedding (dict) – The external embedding to be replaced.

Returns

The replaced embedding.

Return type

torch.Tensor

forward(input_ids: torch.Tensor, external_embeddings: Optional[List[dict]] = None)[source]

The forward function.

Parameters
  • input_ids (torch.Tensor) – The token ids, with shape like [bz, LENGTH] or [LENGTH, ].

  • external_embeddings (Optional[List[dict]]) – The external embeddings. If not passed, only self.external_embeddings will be used. Defaults to None.

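A brief sketch of the call-time form, assuming new_embedding follows the dict format described under add_embeddings; how call-time embeddings interact with the stored self.external_embeddings is not specified here, so treat this only as an illustration of the signature:

>>> out_feat = embedding_layer_wrapper(input_ids, external_embeddings=[new_embedding])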
