mmagic.models.utils.tome_utils

Module Contents

Functions
- add_tome_cfg_hook: Add a forward pre hook to get the image size. This hook can be removed with remove_patch.
- build_mmagic_wrapper_tomesd_block: Make a patched class for a DiffusersWrapper model in mmagic. This patch applies ToMe to the forward function of the block.
- build_mmagic_tomesd_block: Make a patched class for a mmagic StableDiffusion model. This patch applies ToMe to the forward function of the block.
- isinstance_str: Checks whether x has any class named cls_name in its ancestry.
- do_nothing: Build an identity mapping function.
- mps_gather_workaround: Gather function specific to the mps backend (Metal Performance Shaders).
- bipartite_soft_matching_random2d: Partitions the tokens into src and dst and merges r tokens from src to dst.
- build_merge: Build the merge and unmerge functions for a given setting from tome_info.
- mmagic.models.utils.tome_utils.add_tome_cfg_hook(model: torch.nn.Module)[source]
Add a forward pre hook to get the image size. This hook can be removed with remove_patch.
Source: https://github.com/dbolya/tomesd/blob/main/tomesd/patch.py#L158
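The hook mechanism can be illustrated without torch. The sketch below uses hypothetical stand-in classes (`TinyModule`, `Handle`, `record_size_hook` are illustrative names, not part of mmagic); the real code registers the hook on a `torch.nn.Module` via `register_forward_pre_hook`, stores the incoming image size for later ToMe blocks, and returns a handle whose `remove()` undoes the patch.

```python
class Handle:
    """Mimics the handle returned by register_forward_pre_hook."""

    def __init__(self, hooks, key):
        self._hooks, self._key = hooks, key

    def remove(self):
        self._hooks.pop(self._key, None)


class TinyModule:
    """Stand-in for torch.nn.Module: runs pre hooks before forward."""

    def __init__(self):
        self._pre_hooks = {}
        self._tome_info = {}

    def register_forward_pre_hook(self, fn):
        key = len(self._pre_hooks)
        self._pre_hooks[key] = fn
        return Handle(self._pre_hooks, key)

    def __call__(self, x):
        # Pre hooks see the input before forward runs.
        for fn in list(self._pre_hooks.values()):
            fn(self, (x,))
        return x  # identity forward, enough for the sketch


def record_size_hook(module, args):
    # The real hook records the incoming image size so ToMe blocks know
    # the token grid; here we just record the length of the input.
    module._tome_info['size'] = len(args[0])


model = TinyModule()
handle = model.register_forward_pre_hook(record_size_hook)
model([1, 2, 3])
print(model._tome_info['size'])  # 3
handle.remove()  # as with remove_patch, the hook can be detached again
```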
- mmagic.models.utils.tome_utils.build_mmagic_wrapper_tomesd_block(block_class: Type[torch.nn.Module]) → Type[torch.nn.Module][source]
Make a patched class for a DiffusersWrapper model in mmagic. This patch applies ToMe to the forward function of the block.
Refer to: https://github.com/dbolya/tomesd/blob/main/tomesd/patch.py#L67
- Parameters
block_class (torch.nn.Module) – original class that needs ToMe speedup.
- Returns
patched class based on the original class.
- Return type
ToMeBlock (torch.nn.Module)
- mmagic.models.utils.tome_utils.build_mmagic_tomesd_block(block_class: Type[torch.nn.Module]) → Type[torch.nn.Module][source]
Make a patched class for a mmagic StableDiffusion model. This patch applies ToMe to the forward function of the block.
Refer to: https://github.com/dbolya/tomesd/blob/main/tomesd/patch.py#L67
- Parameters
block_class (torch.nn.Module) – original class that needs ToMe speedup.
- Returns
patched class based on the original class.
- Return type
ToMeBlock (torch.nn.Module)
- mmagic.models.utils.tome_utils.isinstance_str(x: object, cls_name: str)[source]
Checks whether x has any class named cls_name in its ancestry. Doesn't require access to the class's implementation.
Source: https://github.com/dbolya/tomesd/blob/main/tomesd/utils.py#L3
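A minimal sketch of this name-based ancestry check: walk the class's method resolution order and compare names, so no import of the target class is required (`isinstance_str_sketch` and the example classes are illustrative, not mmagic's code).

```python
def isinstance_str_sketch(x: object, cls_name: str) -> bool:
    # type(x).__mro__ lists the object's class and all of its ancestors,
    # so a name match anywhere in it means cls_name is in the ancestry.
    return any(cls.__name__ == cls_name for cls in type(x).__mro__)


class BasicTransformerBlock:
    pass


class PatchedBlock(BasicTransformerBlock):
    pass


print(isinstance_str_sketch(PatchedBlock(), 'BasicTransformerBlock'))  # True
print(isinstance_str_sketch(PatchedBlock(), 'Attention'))              # False
```

This is how the patcher can recognize, e.g., diffusers transformer blocks without importing diffusers itself.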
- mmagic.models.utils.tome_utils.do_nothing(x: torch.Tensor, mode: str = None)[source]
Build an identity mapping function that returns its input unchanged.
Source: https://github.com/dbolya/tomesd/blob/main/tomesd/merge.py#L5
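The point of the `mode` keyword is call-signature compatibility: a sketch of the same shape (illustrative, not the mmagic source) can stand in for merge/unmerge wherever merging is disabled.

```python
def do_nothing_sketch(x, mode=None):
    # Accepts the same `mode` keyword as the real merge/unmerge functions,
    # so callers need no special-casing when ToMe is turned off.
    return x


tokens = [0.1, 0.2, 0.3]
assert do_nothing_sketch(tokens, mode='mean') is tokens  # input returned as-is
```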
- mmagic.models.utils.tome_utils.mps_gather_workaround(input, dim, index)[source]
Gather function specific to the mps backend (Metal Performance Shaders).
Source: https://github.com/dbolya/tomesd/blob/main/tomesd/merge.py#L9
- mmagic.models.utils.tome_utils.bipartite_soft_matching_random2d(metric: torch.Tensor, w: int, h: int, sx: int, sy: int, r: int, no_rand: bool = False) → Tuple[Callable, Callable][source]
Partitions the tokens into src and dst and merges r tokens from src to dst. dst tokens are partitioned by choosing one randomly in each (sx, sy) region. For more details refer to Token Merging: Your ViT But Faster (https://arxiv.org/abs/2210.09461).
Source: https://github.com/dbolya/tomesd/blob/main/tomesd/merge.py#L20
- Parameters
metric (torch.Tensor) – metric with size (B, N, C) for similarity computation.
w (int) – image width in tokens.
h (int) – image height in tokens.
sx (int) – stride in the x dimension for dst, must divide w.
sy (int) – stride in the y dimension for dst, must divide h.
r (int) – number of tokens to remove (by merging).
no_rand (bool) – if true, disable randomness (use top left corner only).
- Returns
merge (Callable): token merging function.
unmerge (Callable): token unmerging function.
- Return type
Tuple[Callable, Callable]
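The dst-token selection can be sketched without torch. Assuming `no_rand=True` (the deterministic path), the token grid is tiled into (sy, sx) regions and one token per region (here the top-left corner) becomes a dst token; every other token is src and is a candidate for merging. The helper name below is illustrative.

```python
def dst_indices(w: int, h: int, sx: int, sy: int):
    """Return flat indices of dst tokens for an h x w token grid,
    picking the top-left token of each (sy, sx) region (no_rand case)."""
    assert w % sx == 0 and h % sy == 0, 'sx must divide w and sy must divide h'
    return [y * w + x
            for y in range(0, h, sy)
            for x in range(0, w, sx)]


# A 4x4 token grid with 2x2 regions yields one dst token per region:
# the 16 tokens split into 4 dst and 12 src candidates.
print(dst_indices(w=4, h=4, sx=2, sy=2))  # [0, 2, 8, 10]
```

With `no_rand=False`, the real function instead draws the per-region position from a random generator, which decorrelates the merge pattern across layers and steps.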
- mmagic.models.utils.tome_utils.build_merge(x: torch.Tensor, tome_info: Dict[str, Any]) → Tuple[Callable, ...][source]
Build the merge and unmerge functions for a given setting from tome_info.
Source: https://github.com/dbolya/tomesd/blob/main/tomesd/patch.py#L10
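A hedged sketch of the gating arithmetic behind such a builder: merging is applied only while the feature map's downsample factor stays within a configured maximum, and the number of merged tokens r is a fixed ratio of the current token count. The tome_info keys below (`size`, `max_downsample`, `ratio`) follow the upstream tomesd convention and are assumptions here, not necessarily mmagic's exact schema.

```python
import math


def should_merge(num_tokens: int, tome_info: dict) -> bool:
    # `size` is the original token grid, recorded by the cfg forward pre hook.
    h, w = tome_info['size']
    # Deeper U-Net levels have fewer tokens; infer the downsample factor.
    downsample = int(math.sqrt(h * w // num_tokens))
    return downsample <= tome_info['max_downsample']


def num_merged_tokens(num_tokens: int, tome_info: dict) -> int:
    # r tokens are removed by merging when merging is enabled at this scale;
    # the real code would then call bipartite_soft_matching_random2d, and
    # fall back to identity (do_nothing) functions otherwise.
    if not should_merge(num_tokens, tome_info):
        return 0
    return int(num_tokens * tome_info['ratio'])


info = {'size': (64, 64), 'max_downsample': 2, 'ratio': 0.5}
print(should_merge(4096, info), num_merged_tokens(4096, info))  # True 2048
print(should_merge(256, info), num_merged_tokens(256, info))    # False 0
```

Limiting merging to lightly downsampled levels targets the attention layers with the most tokens, where ToMe saves the most compute for the least quality loss.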