DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation

Wang Zhao1, Yan-Pei Cao2, Jiale Xu1, Yuejiang Dong1,3, Ying Shan1
1ARC Lab, Tencent PCG    2VAST    3Tsinghua University   


Teaser. Given condition images, DI-PCG accurately estimates suitable parameters of procedural generators, resulting in high-fidelity 3D asset creation. Textures and materials are randomly assigned by the procedural generators for visualization.

Abstract

Procedural Content Generation (PCG) is powerful for creating high-quality 3D content, yet controlling it to produce desired shapes is difficult and often requires extensive parameter tuning. Inverse Procedural Content Generation aims to automatically find the best parameters for a given input condition. However, existing sampling-based and neural-network-based methods still suffer from numerous sampling iterations or limited controllability. In this work, we present DI-PCG, a novel and efficient method for inverse PCG from general image conditions. At its core is a lightweight diffusion transformer model, in which PCG parameters are directly treated as the denoising target and the observed images serve as conditions that control parameter generation. DI-PCG is efficient and effective: with only 7.6M network parameters and 30 GPU hours to train, it recovers parameters accurately and generalizes well to in-the-wild images. Quantitative and qualitative experimental results validate the effectiveness of DI-PCG on inverse PCG and image-to-3D generation tasks. DI-PCG offers a promising approach to efficient inverse PCG and represents a valuable step towards a 3D generation paradigm that models how to construct a 3D asset with parametric models.
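To make the architecture concrete, here is a minimal PyTorch sketch of a diffusion transformer that treats the canonicalized generator parameters as the denoising target and conditions on image tokens via cross-attention. All dimensions, the layer count, and the noise-prediction target are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class ParamDenoiser(nn.Module):
    """Denoises a vector of canonicalized PCG parameters, one token each."""
    def __init__(self, n_params=48, d_model=256, n_layers=4, d_img=768):
        super().__init__()
        self.in_proj = nn.Linear(1, d_model)             # one token per parameter
        self.pos = nn.Parameter(torch.zeros(n_params, d_model))
        self.t_embed = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(),
                                     nn.Linear(d_model, d_model))
        self.img_proj = nn.Linear(d_img, d_model)        # project image tokens
        self.blocks = nn.ModuleList([nn.ModuleDict({
            "ln1": nn.LayerNorm(d_model),
            "self_attn": nn.MultiheadAttention(d_model, 4, batch_first=True),
            "ln2": nn.LayerNorm(d_model),
            "cross_attn": nn.MultiheadAttention(d_model, 4, batch_first=True),
            "ln3": nn.LayerNorm(d_model),
            "mlp": nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model)),
        }) for _ in range(n_layers)])
        self.out = nn.Linear(d_model, 1)                 # predicted noise per parameter

    def forward(self, x_t, t, img_tokens):
        # x_t: (B, n_params) noisy parameters; t: (B,) diffusion timesteps;
        # img_tokens: (B, N, d_img) condition-image features.
        h = self.in_proj(x_t.unsqueeze(-1)) + self.pos
        h = h + self.t_embed(t.float().unsqueeze(-1)).unsqueeze(1)
        c = self.img_proj(img_tokens)
        for blk in self.blocks:
            q = blk["ln1"](h)
            h = h + blk["self_attn"](q, q, q)[0]
            h = h + blk["cross_attn"](blk["ln2"](h), c, c)[0]  # image conditioning
            h = h + blk["mlp"](blk["ln3"](h))
        return self.out(h).squeeze(-1)

Treating each scalar parameter as its own token keeps the model tiny; at these sizes the network holds only a few million weights, in line with the 7.6M-parameter budget reported above.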


Pipeline. Overview of DI-PCG. (Left) The procedural generator consists of programs and parameters, and can be randomly sampled to produce varied shapes. (Right) To control it with images, DI-PCG trains a denoising diffusion model directly on canonicalized generator parameters, using DINOv2 to extract condition-image features and injecting them via cross-attention. The resulting parameters are projected back to their original ranges and then fed into the generator, delivering high-quality 3D generation with clean geometry and meshing.
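The canonicalization and back-projection steps in the caption amount to an affine map into [-1, 1] and its inverse. A minimal sketch follows; the parameter names and ranges are hypothetical, and real generators also carry discrete parameters that need their own encoding.

import numpy as np

# Assumed (min, max) ranges for a few illustrative continuous parameters.
PARAM_RANGES = {
    "seat_height": (0.35, 0.55),
    "leg_radius":  (0.01, 0.05),
    "back_angle":  (0.0, 30.0),
}
_LO = np.array([r[0] for r in PARAM_RANGES.values()])
_HI = np.array([r[1] for r in PARAM_RANGES.values()])

def canonicalize(params: dict) -> np.ndarray:
    """Map raw generator parameters into [-1, 1], the diffusion model's domain."""
    x = np.array([params[k] for k in PARAM_RANGES])
    return 2.0 * (x - _LO) / (_HI - _LO) - 1.0

def decanonicalize(x: np.ndarray) -> dict:
    """Project denoised values back to the generator's original ranges."""
    x = np.clip(x, -1.0, 1.0)                # keep outputs inside valid ranges
    return dict(zip(PARAM_RANGES, _LO + (x + 1.0) / 2.0 * (_HI - _LO)))

Clipping before back-projection guarantees that every sampled parameter vector is a valid input to the procedural generator.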

Image Conditioned 3D Generation

Gallery: pairs of input images and the corresponding DI-PCG generations, covering chairs, tables, vases, baskets, flowers, and dandelions.

Sketch Conditioned Generation

Gallery: pairs of input sketches and the corresponding DI-PCG generations, covering chairs, tables, vases, baskets, flowers, and dandelions.

Comparison

Editing

BibTeX

@article{zhao2024di-pcg,
    title={DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation},
    author={Zhao, Wang and Cao, Yan-Pei and Xu, Jiale and Dong, Yuejiang and Shan, Ying},
    journal={arXiv preprint arXiv:2412.15200},
    year={2024},
}