mmagic.models.diffusion_schedulers¶
Package Contents¶
Classes¶
- class mmagic.models.diffusion_schedulers.EditDDIMScheduler(num_train_timesteps=1000, beta_start=0.0001, beta_end=0.02, beta_schedule='linear', variance_type='learned_range', timestep_values=None, clip_sample=True, set_alpha_to_one=True)[source]¶
`EditDDIMScheduler`
Supports the diffusion and reverse processes formulated in https://arxiv.org/abs/2010.02502. The code is heavily influenced by https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py. The difference is that gradient-guided sampling is ensembled into the step function.
- Parameters
num_train_timesteps (int, optional) – Number of diffusion steps used to train the model. Defaults to 1000.
beta_start (float, optional) – Starting value of beta for the noise schedule. Defaults to 0.0001.
beta_end (float, optional) – Final value of beta for the noise schedule. Defaults to 0.02.
beta_schedule (str, optional) – How the beta range is mapped to a sequence of betas, e.g. 'linear'. Defaults to 'linear'.
variance_type (str, optional) – How the variance of the reverse process is obtained, e.g. 'learned_range' or 'fixed_small'. Defaults to 'learned_range'.
timestep_values (list, optional) – Custom timestep values to use during sampling. Defaults to None.
clip_sample (bool, optional) – Whether to clip the predicted sample to [-1, 1] for numerical stability. Defaults to True.
set_alpha_to_one (bool, optional) – Whether to fix the cumulative alpha product of the final denoising step to 1. Defaults to True.
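To make the noise-schedule parameters concrete, here is a minimal sketch (plain NumPy, deliberately independent of mmagic) of how a 'linear' beta schedule and the derived cumulative alpha products are typically computed from num_train_timesteps, beta_start, and beta_end; the exact construction inside mmagic may differ:

```python
import numpy as np

def linear_beta_schedule(num_train_timesteps=1000,
                         beta_start=0.0001,
                         beta_end=0.02):
    """Evenly spaced betas between beta_start and beta_end."""
    return np.linspace(beta_start, beta_end, num_train_timesteps)

betas = linear_beta_schedule()
alphas = 1.0 - betas
# Cumulative product of alphas: the fraction of signal that
# survives the forward (noising) process up to step t.
alphas_cumprod = np.cumprod(alphas)

print(betas[0], betas[-1])    # endpoints of the schedule
print(alphas_cumprod[-1])     # nearly zero: pure noise at the final step
```

The near-zero final cumulative product is why `set_alpha_to_one` matters: it controls whether the very last denoising step treats the preceding cumulative alpha as exactly 1.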
- scale_model_input(sample: torch.FloatTensor, timestep: Optional[int] = None) → torch.FloatTensor[source]¶
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
- Parameters
sample (torch.FloatTensor) – input sample
timestep (int, optional) – current timestep
- Returns
scaled input sample
- Return type
torch.FloatTensor
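The method exists so that a sampling loop can call it uniformly across scheduler types: DDIM-style schedulers typically need no input scaling, while, for example, Karras-style schedulers rescale the input by 1/sqrt(sigma² + 1). A minimal sketch of that contract, using NumPy stand-ins rather than the mmagic classes:

```python
import numpy as np

class DDIMLikeScheduler:
    """Stand-in illustrating the scale_model_input contract."""

    def scale_model_input(self, sample, timestep=None):
        # DDIM-style schedulers typically need no input scaling,
        # so the sample is returned unchanged.
        return sample

class KarrasLikeScheduler:
    """Stand-in for a scheduler that does rescale its input."""

    def __init__(self, sigmas):
        self.sigmas = np.asarray(sigmas, dtype=np.float64)

    def scale_model_input(self, sample, timestep=0):
        sigma = self.sigmas[timestep]
        # Normalize the input variance before it enters the denoiser.
        return sample / np.sqrt(sigma ** 2 + 1.0)

# The same sampling loop works with either scheduler:
sample = np.ones(4)
for scheduler in (DDIMLikeScheduler(), KarrasLikeScheduler(sigmas=[3.0])):
    model_input = scheduler.scale_model_input(sample, timestep=0)
```

Because every scheduler exposes the same method, the denoising loop never needs to branch on the scheduler type.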
- class mmagic.models.diffusion_schedulers.EditDDPMScheduler(num_train_timesteps: int = 1000, beta_start: float = 0.0001, beta_end: float = 0.02, beta_schedule: str = 'linear', trained_betas: Optional[Union[numpy.array, list]] = None, variance_type='fixed_small', clip_sample=True)[source]¶
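The default `variance_type='fixed_small'` refers to the smaller of the two fixed posterior variances from the DDPM paper (https://arxiv.org/abs/2006.11239): beta_t · (1 − ᾱ_{t−1}) / (1 − ᾱ_t). A self-contained NumPy sketch of that quantity under the default linear schedule (an illustration of the convention, not the mmagic code itself):

```python
import numpy as np

# Linear beta schedule matching the defaults above.
num_train_timesteps = 1000
betas = np.linspace(0.0001, 0.02, num_train_timesteps)
alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)

def fixed_small_variance(t):
    """Posterior variance for variance_type='fixed_small':
    beta_t * (1 - alpha_bar_{t-1}) / (1 - alpha_bar_t)."""
    alpha_bar_prev = alphas_cumprod[t - 1] if t > 0 else 1.0
    return betas[t] * (1.0 - alpha_bar_prev) / (1.0 - alphas_cumprod[t])

# At t = 0 the previous cumulative product is 1, so the variance is 0;
# implementations usually clip it to a small floor to avoid log(0).
print(fixed_small_variance(0), fixed_small_variance(500))
```

Since (1 − ᾱ_{t−1}) / (1 − ᾱ_t) is always below 1, this variance is strictly smaller than beta_t, which is the 'fixed_large' choice.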