mmagic.models.editors.eg3d.renderer

The renderer is a module that takes in rays, decides where to sample along each ray, and computes pixel colors using the volume rendering equation.

Module Contents

Classes

EG3DRenderer

Renderer for EG3D. This class samples render points on each input ray

EG3DDecoder

Decoder for EG3D model.

class mmagic.models.editors.eg3d.renderer.EG3DRenderer(decoder_cfg: dict, ray_start: float, ray_end: float, box_warp: float = 1, depth_resolution: int = 64, depth_resolution_importance: int = 64, density_noise: float = 0, clamp_mode: str = 'softplus', white_back: bool = True, projection_mode: str = 'Official')[source]

Bases: mmengine.model.BaseModule

Renderer for EG3D. This class samples render points on each input ray and interpolates the triplane features corresponding to the points’ coordinates. It then predicts each point’s RGB feature and density (sigma) with a neural network and computes the RGB feature of each ray by integration. Unlike typical NeRF models, the decoder of EG3DRenderer takes the triplane feature of each point as input instead of a positional encoding of the coordinates.

Parameters
  • decoder_cfg (dict) – The config to build neural renderer.

  • ray_start (float) – The start position of all rays.

  • ray_end (float) – The end position of all rays.

  • box_warp (float) – The side length of the cube spanned by the triplanes. The box is axis-aligned and centered at the origin, so each axis spans [-box_warp/2, box_warp/2]. For example, if box_warp=1.8, each axis spans [-0.9, 0.9]. Defaults to 1.

  • depth_resolution (int) – Resolution of depth, as well as the number of points per ray. Defaults to 64.

  • depth_resolution_importance (int) – Resolution of depth in hierarchical sampling. Defaults to 64.

  • density_noise (float) – Strength of the noise added to the predicted density. Defaults to 0.

  • clamp_mode (str) – The clamp mode for the density predicted by the neural renderer. Defaults to ‘softplus’.

  • white_back (bool) – Whether to render a white background. Defaults to True.

  • projection_mode (str) – The projection method used to map the coordinates of render points to plane features. For the usage of this argument, please refer to self.project_onto_planes() and https://github.com/NVlabs/eg3d/issues/67. Defaults to ‘Official’.
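Example (a minimal construction sketch; the decoder_cfg keys are assumed to mirror EG3DDecoder’s arguments documented below, and the ray bounds are illustrative values, not recommended settings):

    from mmagic.models.editors.eg3d.renderer import EG3DRenderer

    renderer = EG3DRenderer(
        decoder_cfg=dict(in_channels=32, hidden_channels=64),  # assumed keys, see EG3DDecoder below
        ray_start=2.25,   # illustrative near bound
        ray_end=3.3,      # illustrative far bound
        box_warp=1,
        depth_resolution=48,
        depth_resolution_importance=48,
        white_back=True,
        projection_mode='Official',
    )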

get_value(target: str, render_kwargs: Optional[dict] = None) → Any[source]

Get the value of the target field.

Parameters
  • target (str) – The key of the target field.

  • render_kwargs (Optional[dict], optional) – The input keyword argument dict. Defaults to None.

Returns

The default value of the target field.

Return type

Any

forward(planes: torch.Tensor, ray_origins: torch.Tensor, ray_directions: torch.Tensor, render_kwargs: Optional[dict] = None) → Tuple[torch.Tensor][source]

Render the 2D RGB feature, weighted depth and weights from the passed triplane features and rays. ‘weights’ denotes w in Equation 5 of the NeRF paper.

Parameters
  • planes (torch.Tensor) – The triplane features shape like (bz, 3, TriPlane_feat, TriPlane_res, TriPlane_res).

  • ray_origins (torch.Tensor) – The origin of each ray to render, shape like (bz, NeRF_res * NeRF_res, 3).

  • ray_directions (torch.Tensor) – The direction vector of each ray to render, shape like (bz, NeRF_res * NeRF_res, 3).

  • render_kwargs (Optional[dict], optional) – The specific kwargs for rendering. Defaults to None.

Returns

Rendered RGB features, weighted depths and weights.

Return type

Tuple[torch.Tensor]
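Example (a hedged usage sketch reusing the renderer constructed above; the tensor shapes follow the parameter descriptions, with illustrative values for bz, NeRF_res, TriPlane_feat and TriPlane_res; omitting render_kwargs assumes the defaults passed at construction are used):

    import torch

    bz, nerf_res, triplane_feat, triplane_res = 1, 64, 32, 256

    planes = torch.randn(bz, 3, triplane_feat, triplane_res, triplane_res)
    ray_origins = torch.randn(bz, nerf_res * nerf_res, 3)
    ray_directions = torch.nn.functional.normalize(
        torch.randn(bz, nerf_res * nerf_res, 3), dim=-1)

    # rendered RGB features, weighted depth and per-interval weights for every ray
    rgb_feat, depth, weights = renderer(planes, ray_origins, ray_directions)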

sample_stratified(ray_origins: torch.Tensor, ray_start: Union[float, torch.Tensor], ray_end: Union[float, torch.Tensor], depth_resolution: int) → torch.Tensor[source]

Return depths of approximately uniformly spaced samples along rays.

Parameters
  • ray_origins (torch.Tensor) – The origin of each ray, shape like (bz, NeRF_res * NeRF_res, 3). Only used to provide device and shape info.

  • ray_start (Union[float, torch.Tensor]) – The start position of rays. If a float is passed, all rays will have the same start distance.

  • ray_end (Union[float, torch.Tensor]) – The end position of rays. If a float is passed, all rays will have the same end distance.

  • depth_resolution (int) – Resolution of depth, as well as the number of points per ray.

Returns

The sampled coarse depths, shape like (bz, NeRF_res * NeRF_res, 1).

Return type

torch.Tensor
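The idea is standard NeRF-style stratification: split [ray_start, ray_end] into evenly spaced bins and jitter one sample inside each bin. A self-contained sketch (the function name and the returned layout, which keeps an explicit depth axis, are assumptions):

    import torch

    def stratified_depths(ray_origins, ray_start, ray_end, depth_resolution):
        # Evenly spaced depth values between ray_start and ray_end, then a
        # uniform jitter inside each bin so the samples stay roughly stratified.
        bz, n_rays, _ = ray_origins.shape
        depths = torch.linspace(ray_start, ray_end, depth_resolution,
                                device=ray_origins.device)
        depths = depths.reshape(1, 1, depth_resolution, 1).repeat(bz, n_rays, 1, 1)
        delta = (ray_end - ray_start) / (depth_resolution - 1)
        return depths + torch.rand_like(depths) * delta  # (bz, n_rays, depth_resolution, 1)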

neural_rendering(planes: torch.Tensor, sample_coordinates: torch.Tensor, density_noise: float, box_warp: float) → dict[source]

Predict the RGB features and densities of the coordinates with the neural renderer and the triplane input.

Parameters
  • planes (torch.Tensor) – Triplane feature shape like (bz, 3, TriPlane_feat, TriPlane_res, TriPlane_res).

  • sample_coordinates (torch.Tensor) – Coordinates of the sampling points, shape like (bz, N_depth * NeRF_res * NeRF_res, 3).

  • density_noise (float) – Strength of the noise added to the predicted density.

  • box_warp (float) – The side length of the cube spanned by the triplanes.

Returns

A dict contains RGB features (‘rgb’) and densities (‘sigma’).

Return type

dict
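A plausible sketch of the steps this method combines: interpolate triplane features at the sample coordinates, decode them to RGB and sigma, then optionally perturb the predicted density with noise. It assumes the renderer exposes its decoder as an attribute named decoder, which is not confirmed by this page:

    import torch

    def neural_rendering_sketch(renderer, planes, sample_coordinates,
                                density_noise, box_warp):
        feat = renderer.sample_from_planes(planes, sample_coordinates,
                                           box_warp=box_warp)
        out = renderer.decoder(feat)             # assumed attribute name
        if density_noise > 0:
            out['sigma'] = out['sigma'] + torch.randn_like(out['sigma']) * density_noise
        return out                               # dict with 'rgb' and 'sigma'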

sample_from_planes(plane_features: torch.Tensor, coordinates: torch.Tensor, interp_mode: str = 'bilinear', box_warp: float = None) → torch.Tensor[source]

Sample features from the triplane features at the passed coordinates of the render points.

Parameters
  • plane_features (torch.Tensor) – The triplane feature.

  • coordinates (torch.Tensor) – The coordinates of points to render.

  • interp_mode (str) – The interpolation mode to sample feature from triplane.

  • box_warp (float) – The side length of the cube spanned by the triplanes.

Returns

The sampled triplane feature of the render points.

Return type

torch.Tensor
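A sketch of how triplane sampling typically works: rescale coordinates from [-box_warp/2, box_warp/2] to grid_sample’s [-1, 1] range and bilinearly interpolate each plane. It assumes the plane axis has been folded into the batch dimension and that the 2D projections come from project_onto_planes(); the function name is hypothetical:

    import torch.nn.functional as F

    def sample_planes_sketch(plane_features, projected_2d, box_warp, mode='bilinear'):
        # plane_features: (bz * 3, C, H, W), projected_2d: (bz * 3, M, 2)
        grid = (2 / box_warp) * projected_2d       # map [-box_warp/2, box_warp/2] -> [-1, 1]
        grid = grid.unsqueeze(1)                   # (bz * 3, 1, M, 2) for grid_sample
        feat = F.grid_sample(plane_features, grid, mode=mode,
                             padding_mode='zeros', align_corners=False)
        return feat.squeeze(2).permute(0, 2, 1)    # (bz * 3, M, C)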

project_onto_planes(coordinates: torch.Tensor) → torch.Tensor[source]

Project 3D points onto the planes formed by the coordinate axes. In this function, we use an indexing operation instead of matrix multiplication to achieve better performance.

In the original implementation, the mapping matrix is incorrect. Therefore we allow users to set projection_mode to control the projection behavior in the initialization function of EG3DRenderer. If you want to run inference with the official pretrained model, please remember to set projection_mode = ‘official’. For more information, please refer to https://github.com/NVlabs/eg3d/issues/67.

If the projection mode is ‘official’, the equivalent projection matrix is the inverse of:

[[[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[1, 0, 0], [0, 0, 1], [0, 1, 0]], [[0, 0, 1], [1, 0, 0], [0, 1, 0]]]

Otherwise, the equivalent projection matrix is the inverse of:

[[[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[1, 0, 0], [0, 0, 1], [0, 1, 0]], [[0, 0, 1], [0, 1, 0], [1, 0, 0]]]

Parameters

coordinates (torch.Tensor) – The coordinates of the render points, shape like (bz, NeRF_res * NeRF_res * N_depth, 3).

Returns

The projected coordinates.

Return type

torch.Tensor
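The indexing equivalent of the two matrix sets above, written out as a sketch (the function name and the exact output layout are assumptions; the actual implementation flattens the plane axis into the batch dimension):

    import torch

    def project_onto_planes_sketch(coordinates, projection_mode='Official'):
        # coordinates: (bz, M, 3) -> (bz, 3, M, 2); the axis pairs below follow
        # from multiplying by the inverse matrices above and keeping the first
        # two components, replaced here by plain indexing.
        x, y, z = coordinates.unbind(dim=-1)
        if projection_mode.lower() == 'official':
            pairs = [(x, y), (x, z), (z, x)]
        else:
            pairs = [(x, y), (x, z), (z, y)]
        return torch.stack([torch.stack(p, dim=-1) for p in pairs], dim=1)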

unify_samples(depths_c: torch.Tensor, colors_c: torch.Tensor, densities_c: torch.Tensor, depths_f: torch.Tensor, colors_f: torch.Tensor, densities_f: torch.Tensor) → Tuple[torch.Tensor][source]

Sort and merge coarse samples and fine samples.

Parameters
  • depths_c (torch.Tensor) – Coarse depths shape like (bz, NeRF_res * NeRF_res, N_depth, 1).

  • colors_c (torch.Tensor) – Coarse color features shape like (bz, NeRF_res * NeRF_res, N_depth, N_feat).

  • densities_c (torch.Tensor) – Coarse densities shape like (bz, NeRF_res * NeRF_res, N_depth, 1).

  • depths_f (torch.Tensor) – Fine depths shape like (bz, NeRF_res * NeRF_res, N_depth_fine, 1).

  • colors_f (torch.Tensor) – Fine color features shape like (bz, NeRF_res * NeRF_res, N_depth_fine, N_feat).

  • densities_f (torch.Tensor) – Fine densities shape like (bz, NeRF_res * NeRF_res, N_depth_fine, 1).

Returns

Unified depths, color features and densities. The third dimension of the returned tensors is N_depth + N_depth_fine.

Return type

Tuple[torch.Tensor]
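A sketch of the merge: concatenate coarse and fine samples along the depth axis, sort each ray’s samples by depth, and reorder the colors and densities with the same permutation (function name hypothetical):

    import torch

    def unify_samples_sketch(depths_c, colors_c, densities_c,
                             depths_f, colors_f, densities_f):
        depths = torch.cat([depths_c, depths_f], dim=-2)
        colors = torch.cat([colors_c, colors_f], dim=-2)
        densities = torch.cat([densities_c, densities_f], dim=-2)

        _, indices = torch.sort(depths, dim=-2)    # sort every ray by depth
        depths = torch.gather(depths, -2, indices)
        colors = torch.gather(colors, -2, indices.expand(-1, -1, -1, colors.shape[-1]))
        densities = torch.gather(densities, -2, indices)
        return depths, colors, densities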

volume_rendering(colors: torch.Tensor, densities: torch.Tensor, depths: torch.Tensor) → Tuple[torch.Tensor][source]

Volume rendering.

Parameters
  • colors (torch.Tensor) – Color features for each point. Shape like (bz, N_points, N_depth, N_feature).

  • densities (torch.Tensor) – Density for each point. Shape like (bz, N_points, N_depth, 1).

  • depths (torch.Tensor) – Depths for each point. Shape like (bz, N_points, N_depth, 1).

Returns

A tuple of color features (bz, N_points, N_feature), weighted depth (bz, N_points, 1) and weights (bz, N_points, N_depth-1, 1).

Return type

Tuple[torch.Tensor]
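A hedged sketch of midpoint-rule volume rendering, which also explains why the returned weights have N_depth - 1 entries (one per depth interval, not per sample). The -1 shift inside softplus and the white-background compositing step are assumptions not confirmed by this page:

    import torch
    import torch.nn.functional as F

    def volume_rendering_sketch(colors, densities, depths, white_back=True):
        deltas = depths[:, :, 1:] - depths[:, :, :-1]
        colors_mid = (colors[:, :, :-1] + colors[:, :, 1:]) / 2
        densities_mid = (densities[:, :, :-1] + densities[:, :, 1:]) / 2
        depths_mid = (depths[:, :, :-1] + depths[:, :, 1:]) / 2

        sigma = F.softplus(densities_mid - 1)      # clamp_mode='softplus'; the -1 shift is assumed
        alpha = 1 - torch.exp(-sigma * deltas)     # opacity of each depth interval
        trans = torch.cumprod(
            torch.cat([torch.ones_like(alpha[:, :, :1]), 1 - alpha + 1e-10], dim=-2),
            dim=-2)[:, :, :-1]                     # transmittance up to each interval
        weights = alpha * trans                    # w in Equation 5 of the NeRF paper

        rgb = (weights * colors_mid).sum(dim=-2)   # (bz, N_points, N_feature)
        depth = (weights * depths_mid).sum(dim=-2) # (bz, N_points, 1)
        if white_back:
            rgb = rgb + 1 - weights.sum(dim=-2)    # composite onto a white background
        return rgb, depth, weights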

sample_importance(z_vals: torch.Tensor, weights: torch.Tensor, N_importance: int) → torch.Tensor[source]

Return depths of importance sampled points along rays.

Parameters
  • z_vals (torch.Tensor) – Coarse Z value (depth). Shape like (bz, N_points, N_depth, N_feature).

  • weights (torch.Tensor) – Weights of the coarse samples. Shape like (bz, N_points, N_depth-1, 1).

  • N_importance (int) – Number of samples to resample.

Returns

The depths of the importance sampled points.

Return type

torch.Tensor
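The resampling follows NeRF-style hierarchical sampling: treat the coarse weights as a piecewise-constant PDF over the depth bins and draw new depths by inverse-transform sampling. A self-contained sketch with the batch and ray dimensions flattened for brevity (function name hypothetical):

    import torch

    def sample_pdf_sketch(bins, weights, n_importance):
        # bins: (N_rays, N_depth), weights: (N_rays, N_depth - 1)
        pdf = weights + 1e-5
        pdf = pdf / pdf.sum(dim=-1, keepdim=True)
        cdf = torch.cumsum(pdf, dim=-1)
        cdf = torch.cat([torch.zeros_like(cdf[..., :1]), cdf], dim=-1)

        u = torch.rand(weights.shape[0], n_importance, device=weights.device)
        idx = torch.searchsorted(cdf, u, right=True).clamp(1, cdf.shape[-1] - 1)

        cdf_lo, cdf_hi = torch.gather(cdf, -1, idx - 1), torch.gather(cdf, -1, idx)
        bin_lo, bin_hi = torch.gather(bins, -1, idx - 1), torch.gather(bins, -1, idx)

        t = (u - cdf_lo) / (cdf_hi - cdf_lo).clamp_min(1e-5)
        return bin_lo + t * (bin_hi - bin_lo)      # (N_rays, n_importance)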

class mmagic.models.editors.eg3d.renderer.EG3DDecoder(in_channels: int, out_channels: int = 32, hidden_channels: int = 64, lr_multiplier: float = 1, rgb_padding: float = 0.001)[source]

Bases: mmengine.model.BaseModule

Decoder for EG3D model.

Parameters
  • in_channels (int) – The number of input channels.

  • out_channels (int) – The number of output channels. Defaults to 32.

  • hidden_channels (int) – The number of channels of the hidden layer. Defaults to 64.

  • lr_multiplier (float, optional) – Equalized learning rate multiplier. Defaults to 1.

  • rgb_padding (float) – Padding for RGB output. Defaults to 0.001.

forward(sampled_features: torch.Tensor) → dict[source]

Forward function.

Parameters

sampled_features (torch.Tensor) – The sampled triplane features for each point. Shape like (batch_size, xxx, xxx, n_ch).

Returns

A dict containing the RGB feature and sigma (density) value for each point.

Return type

dict
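Example (a hedged sketch; the in_channels value and the middle dimensions of the input tensor are illustrative, since the docstring above leaves them unspecified):

    import torch
    from mmagic.models.editors.eg3d.renderer import EG3DDecoder

    decoder = EG3DDecoder(in_channels=32)
    sampled = torch.randn(2, 3, 4096, 32)          # (batch_size, ?, ?, n_ch), illustrative shape
    out = decoder(sampled)
    print({k: v.shape for k, v in out.items()})    # per-point rgb feature and sigma value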
