mmagic.models.diffusion_schedulers
Package Contents
Classes
- EditDDIMScheduler — DDIM scheduler with gradient-guided sampling support.
- EditDDPMScheduler — DDPM scheduler.
- class mmagic.models.diffusion_schedulers.EditDDIMScheduler(num_train_timesteps=1000, beta_start=0.0001, beta_end=0.02, beta_schedule='linear', variance_type='learned_range', timestep_values=None, clip_sample=True, set_alpha_to_one=True)[source]
`EditDDIMScheduler`
supports the diffusion and reverse processes formulated in https://arxiv.org/abs/2010.02502. The code is heavily influenced by https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py. The difference is that gradient-guided sampling is integrated into the step function.
- Parameters
num_train_timesteps (int, optional) – Number of diffusion steps used to train the model. Defaults to 1000.
beta_start (float, optional) – Starting beta value of the noise schedule. Defaults to 0.0001.
beta_end (float, optional) – Final beta value of the noise schedule. Defaults to 0.02.
beta_schedule (str, optional) – How beta is interpolated from beta_start to beta_end. Defaults to "linear".
variance_type (str, optional) – How the variance of the reverse process is parameterized. Defaults to 'learned_range'.
timestep_values (optional) – Custom timestep values. Defaults to None.
clip_sample (bool, optional) – Whether to clip the predicted sample to [-1, 1] for numerical stability. Defaults to True.
set_alpha_to_one (bool, optional) – Whether to fix the cumulative alpha product of the previous step to 1 at the final step of the reverse process. Defaults to True.
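A minimal usage sketch (not part of the original docs): construct the scheduler with the documented defaults and shorten the schedule for inference. The 50-step count is an arbitrary example.

```python
from mmagic.models.diffusion_schedulers import EditDDIMScheduler

# Build the scheduler with the documented defaults.
scheduler = EditDDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.0001,
    beta_end=0.02,
    beta_schedule='linear',
)

# Keep only 50 of the 1000 training steps for fast DDIM inference.
scheduler.set_timesteps(num_inference_steps=50)
```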
- set_timesteps(num_inference_steps, offset=0)
Set the discrete timesteps used for the reverse diffusion process, choosing num_inference_steps of the num_train_timesteps training steps, optionally shifted by offset.
- scale_model_input(sample: torch.FloatTensor, timestep: Optional[int] = None) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
- Parameters
sample (torch.FloatTensor) – input sample.
timestep (int, optional) – current timestep.
- Returns
scaled input sample.
- Return type
torch.FloatTensor
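A short sketch of the intended call pattern, continuing the construction above; `latent` and `t` are hypothetical stand-ins. For DDIM-style schedulers this is typically an identity transform, but calling it keeps a denoising loop interchangeable across schedulers:

```python
import torch

latent = torch.randn(1, 4, 64, 64)  # hypothetical example latent
t = 999                             # hypothetical current timestep

# Always scale the model input, even if this scheduler implements it
# as a no-op; other schedulers do rescale by timestep.
model_input = scheduler.scale_model_input(latent, timestep=t)
```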
- _get_variance(timestep, prev_timestep)
Compute the variance of the reverse process between prev_timestep and timestep.
- step(model_output: Union[torch.FloatTensor, numpy.ndarray], timestep: int, sample: Union[torch.FloatTensor, numpy.ndarray], cond_fn=None, cond_kwargs={}, eta: float = 0.0, use_clipped_model_output: bool = False, generator=None)
Perform one reverse (denoising) DDIM step from timestep to the previous timestep. eta controls the amount of injected noise (eta=0.0 is deterministic DDIM), and cond_fn with cond_kwargs enables gradient-guided sampling; see the sketch below.
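A hedged sketch of a complete DDIM denoising loop built on step(), continuing the construction above. `unet` is a hypothetical stand-in for a real noise-prediction network; the timesteps attribute and the `'prev_sample'` key are assumptions borrowed from diffusers-style schedulers, so adjust if the actual return type differs:

```python
import torch

def unet(sample, t):
    """Hypothetical stand-in for a real noise-prediction network."""
    return torch.zeros_like(sample)

latent = torch.randn(1, 4, 64, 64)  # start from a Gaussian latent
scheduler.set_timesteps(num_inference_steps=50)

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(latent, timestep=t)
    with torch.no_grad():
        noise_pred = unet(model_input, t)
    # eta=0.0 is deterministic DDIM; pass cond_fn / cond_kwargs here to
    # enable the gradient-guided sampling this class adds.
    output = scheduler.step(noise_pred, t, latent, eta=0.0)
    latent = output['prev_sample']  # assumed diffusers-style output key
```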
- add_noise(original_samples, noise, timesteps)
Add noise to original_samples according to the forward diffusion schedule at the given timesteps; see the sketch below.
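A short training-side sketch of add_noise; all tensors are illustrative stand-ins, and 1000 matches the num_train_timesteps default above.

```python
import torch

x0 = torch.randn(8, 3, 64, 64)     # stand-in batch of clean samples
noise = torch.randn_like(x0)       # Gaussian noise to mix in
t = torch.randint(0, 1000, (8,))   # one random timestep per sample
noisy = scheduler.add_noise(x0, noise, t)
```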
- __len__()
- class mmagic.models.diffusion_schedulers.EditDDPMScheduler(num_train_timesteps: int = 1000, beta_start: float = 0.0001, beta_end: float = 0.02, beta_schedule: str = 'linear', trained_betas: Optional[Union[numpy.array, list]] = None, variance_type='fixed_small', clip_sample=True)[source]
Denoising diffusion probabilistic models (DDPM) scheduler; the ancestral-sampling counterpart of EditDDIMScheduler above.
- set_timesteps(num_inference_steps)
Set the discrete timesteps used for the reverse diffusion process.
- _get_variance(t, predicted_variance=None, variance_type=None)
Compute the variance at timestep t, optionally from a model-predicted variance and with an overriding variance_type.
- step(model_output: torch.FloatTensor, timestep: int, sample: torch.FloatTensor, predict_epsilon=True, cond_fn=None, cond_kwargs={}, generator=None)
Perform one reverse DDPM step from timestep to the previous timestep; cond_fn and cond_kwargs enable the same gradient-guided sampling as in EditDDIMScheduler. A sampling sketch follows.
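A hedged sketch of ancestral DDPM sampling with this step(). As before, `unet` is a hypothetical stand-in, and the timesteps attribute and the `'prev_sample'` key are assumptions borrowed from diffusers-style schedulers:

```python
import torch
from mmagic.models.diffusion_schedulers import EditDDPMScheduler

def unet(sample, t):
    """Hypothetical stand-in for a real noise-prediction network."""
    return torch.zeros_like(sample)

ddpm = EditDDPMScheduler(num_train_timesteps=1000)
ddpm.set_timesteps(num_inference_steps=1000)  # full ancestral chain

sample = torch.randn(1, 3, 64, 64)
for t in ddpm.timesteps:
    noise_pred = unet(sample, t)
    # predict_epsilon=True: interpret model_output as predicted noise.
    out = ddpm.step(noise_pred, t, sample, predict_epsilon=True)
    sample = out['prev_sample']  # assumed diffusers-style output key
```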
- add_noise(original_samples, noise, timesteps)
Add noise to original_samples according to the forward diffusion schedule at the given timesteps.
- abstract training_loss(model, x_0, t)
Abstract hook for computing the training loss; must be implemented by subclasses.
- abstract sample_timestep()
Abstract hook for sampling a training timestep; must be implemented by subclasses.
- __len__()