# Migration of Distributed Training Settings
We have merged MMGeneration 1.x into MMagic. This guide describes how to migrate the distributed training settings of MMGeneration.
In the 0.x version, MMGeneration uses `DDPWrapper` and `DynamicRunner` to train static and dynamic models (e.g., PGGAN and StyleGANv2), respectively. In the 1.x version, we use `MMSeparateDistributedDataParallel` provided by MMEngine to implement distributed training; it wraps each submodule of the model (e.g., the generator and the discriminator) in its own distributed wrapper, which suits models trained with multiple optimizers.
The configuration differences are shown below:
<table class="docutils">
<thead>
  <tr>
    <th> Static Model in 0.x Version </th>
    <th> Static Model in 1.x Version </th>
  </tr>
</thead>
<tbody>
<tr>
<td valign="top">

```python
# Use DDPWrapper
use_ddp_wrapper = True
find_unused_parameters = False

runner = dict(
    type='DynamicIterBasedRunner',
    is_dynamic_ddp=False)
```

</td>
<td valign="top">

```python
model_wrapper_cfg = dict(
    type='MMSeparateDistributedDataParallel',
    broadcast_buffers=False,
    find_unused_parameters=False)
```

</td>
</tr>
</tbody>
</table>
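Note that in 1.x there is no custom runner for distributed training: MMEngine's `Runner` reads `model_wrapper_cfg` from the config and applies the wrapper when training is launched with multiple processes. A minimal sketch, assuming a complete MMagic config that contains the snippet above (the config path below is hypothetical):

```python
# A minimal sketch; the config path is hypothetical and stands in for any
# MMagic config that sets `model_wrapper_cfg` as shown above.
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile('configs/styleganv2/your_stylegan2_config.py')
# In a distributed environment, Runner wraps the model with
# MMSeparateDistributedDataParallel using the settings in `model_wrapper_cfg`.
runner = Runner.from_cfg(cfg)
runner.train()
```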
<table class="docutils">
<thead>
  <tr>
    <th> Dynamic Model in 0.x Version </th>
    <th> Dynamic Model in 1.x Version </th>
  </tr>
</thead>
<tbody>
<tr>
<td valign="top">

```python
use_ddp_wrapper = False
find_unused_parameters = False

# Use DynamicRunner
runner = dict(
    type='DynamicIterBasedRunner',
    is_dynamic_ddp=True)
```

</td>
<td valign="top">

```python
model_wrapper_cfg = dict(
    type='MMSeparateDistributedDataParallel',
    broadcast_buffers=False,
    find_unused_parameters=True)  # set `find_unused_parameters` for dynamic models
```

</td>
</tr>
</tbody>
</table>
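The `find_unused_parameters=True` setting carries over the behavior of `is_dynamic_ddp=True`: in dynamic models such as PGGAN, parts of the network are skipped at some iterations (e.g., stages above the current resolution), so their parameters receive no gradient, and DDP must be told to expect this. Below is a toy, self-contained illustration in plain PyTorch (not MMagic code); all names in it are made up for the example.

```python
# A toy illustration of why dynamic models need `find_unused_parameters=True`.
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class ToyDynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Linear(8, 8)
        self.extra = nn.Linear(8, 8)   # only used at some training stages

    def forward(self, x, use_extra: bool):
        x = self.shared(x)
        if use_extra:                  # e.g., a higher-resolution PGGAN stage
            x = self.extra(x)
        return x

# Inside an initialized distributed process group, one would wrap the model as
#   model = DDP(ToyDynamicNet().cuda(), find_unused_parameters=True)
# With find_unused_parameters=False, any iteration where `use_extra` is False
# leaves `self.extra` without gradients, and DDP errors out while trying to
# synchronize them across processes.
x = torch.randn(2, 8)
out = ToyDynamicNet()(x, use_extra=False)  # `extra` is skipped this step
```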