Overview of Multi-scale VQVAE. Given a 3D model, we leverage multi-view RGB-D (depth) renderings and Plücker embeddings as the input to our multi-view encoder 𝓔. The encoder predicts a continuous feature map, which the multi-scale quantizer 𝓠 quantizes into a sequence R = (r₁, r₂, ..., rₖ) of latent triplane features; codes at all scales share the same codebook. The triplane decoder then converts the quantized latent triplane features into the triplane representation in a plane-wise manner. The predicted triplane is supervised with multi-view ground-truth image, depth, and normal renderings.
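The sketch below illustrates one way the multi-scale quantizer 𝓠 with a shared codebook could be realized: the continuous feature map is quantized as a coarse-to-fine residual pyramid, every scale looking up the same codebook. All names (`MultiScaleQuantizer`, `codebook_size`, `scales`) and the exact residual scheme are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of shared-codebook multi-scale residual quantization.
# The scale schedule, dimensions, and residual scheme are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleQuantizer(nn.Module):
    def __init__(self, dim=256, codebook_size=4096, scales=(1, 2, 4, 8, 16)):
        super().__init__()
        self.scales = scales
        # A single codebook shared by every scale.
        self.codebook = nn.Embedding(codebook_size, dim)

    def quantize(self, z):
        # Nearest-neighbour lookup in the shared codebook.
        d = torch.cdist(z.flatten(0, -2), self.codebook.weight)   # (N, K)
        idx = d.argmin(dim=-1)
        return self.codebook(idx).view_as(z), idx

    def forward(self, f):
        # f: (B, C, H, W) continuous feature map from the encoder E.
        residual, tokens, f_hat = f, [], torch.zeros_like(f)
        for s in self.scales:
            # Downsample the residual to the current scale, quantize, upsample back.
            r = F.interpolate(residual, size=(s, s), mode='area')
            q, idx = self.quantize(r.permute(0, 2, 3, 1))
            q = F.interpolate(q.permute(0, 3, 1, 2), size=f.shape[-2:], mode='bilinear')
            f_hat = f_hat + q
            residual = residual - q
            tokens.append(idx)
        # Quantized features and per-scale token maps R = (r1, ..., rK).
        return f_hat, tokens
```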
Overview of 3D Generation and 3D Understanding. Given a 3D model, our 3D VQVAE encodes it into multi-scale discrete tokens for both 3D generation and understanding.
In (a) 3D Generation, the text prompt or a single image is encoded by the CLIP text encoder or DINOv2, respectively, and the encoded condition features are injected into the decoder-only transformer via cross-attention. The transformer then causally predicts the latent triplane tokens scale by scale.
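A minimal sketch of how such condition injection could look: a decoder block attends causally over already-generated token scales, then cross-attends to the CLIP/DINOv2 condition tokens. Layer sizes are placeholders, and the per-token causal mask is a simplification; the actual masking may be block-wise across scales.

```python
# Hypothetical decoder block: causal self-attention + cross-attention to condition tokens.
import torch
import torch.nn as nn

class CrossAttnBlock(nn.Module):
    def __init__(self, dim=1024, heads=16):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x, cond, causal_mask):
        # Causal self-attention over previously generated triplane token scales.
        h = self.n1(x)
        x = x + self.self_attn(h, h, h, attn_mask=causal_mask)[0]
        # Cross-attention to the CLIP (text) or DINOv2 (image) condition features.
        h = self.n2(x)
        x = x + self.cross_attn(h, cond, cond)[0]
        return x + self.mlp(self.n3(x))

# Usage: x holds tokens of earlier scales, cond holds the condition features.
L = 32
mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)  # block future positions
out = CrossAttnBlock()(torch.randn(2, L, 1024), torch.randn(2, 77, 1024), mask)
```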
In (b) 3D Understanding, the truncated 3D tokens are first processed by an MLP projector. The large language model then receives a multimodal sequence of text and 3D tokens and generates a detailed caption describing the input 3D model.
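Below is a sketch of the projector-and-splice step, assuming the MLP maps the 3D token features into the LLM embedding space and the projected tokens are prepended to the text embeddings. Dimensions and helper names are hypothetical.

```python
# Hypothetical MLP projector mapping truncated 3D tokens into the LLM embedding space.
import torch
import torch.nn as nn

class Projector(nn.Module):
    def __init__(self, in_dim=256, llm_dim=4096):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim))

    def forward(self, tokens_3d):
        # tokens_3d: (B, N, in_dim) features of the truncated 3D token scales.
        return self.mlp(tokens_3d)

def build_multimodal_inputs(text_embeds, projected_3d):
    # Prepend the projected 3D tokens to the text embeddings; the LLM then
    # generates a caption conditioned on both modalities.
    return torch.cat([projected_3d, text_embeds], dim=1)
```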