StyleCity: Large-Scale 3D Urban Scenes Stylization

Yingshu Chen, Huajian Huang, Tuan-Anh Vu, Ka Chun Shum, Sai-Kit Yeung
The Hong Kong University of Science and Technology

ECCV 2024

*Model trained on a single RTX 3090 GPU
TL;DR: StyleCity synthesizes style-aligned urban textures and a 360° sky background while keeping the scene identity intact.
Our StyleCity framework automatically stylizes the textured meshes of large-scale urban scenes and generates harmonious omnidirectional sky backgrounds in a controllable manner, guided by input style prompts, i.e., image and text references. This solution has many potential applications, such as making city exploration in Google Earth a more personalized and visually captivating experience.

Abstract

We introduce StyleCity, the first vision-and-text-driven texture stylization system for large-scale urban scenes. Taking an image and text as references, StyleCity stylizes a 3D textured mesh of a large-scale urban scene in a semantics-aware fashion and generates a harmonious omnidirectional sky background. To achieve this, we propose to stylize a neural texture field by transferring 2D vision-and-text priors to 3D both globally and locally. During 3D stylization, we progressively scale the planned training views of the input 3D scene at different levels in order to preserve high-quality scene content. We then optimize the scene style globally by adapting the scale of the style image to the scale of the training views. Moreover, we enhance local semantic consistency with a semantics-aware style loss, which is crucial for photorealistic stylization. Besides texture stylization, we further adopt a generative diffusion model to synthesize a style-consistent omnidirectional sky image, which offers a more immersive atmosphere and assists the semantic stylization process. The stylized neural texture field can be baked into an arbitrary-resolution texture, enabling seamless integration into conventional rendering pipelines and significantly easing the virtual production prototyping process.
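As a rough illustration of the global-to-local optimization described above, the sketch below (in PyTorch) pairs progressively zoomed-in training views with a correspondingly rescaled style reference. It is a minimal example, not our released code: the number of stages, the zoom factors, and the helper names (view_scale_schedule, scale_style_to_view) are assumptions made for this illustration.

# Illustrative coarse-to-fine schedule for progressively scaled training views.
# The stage count, zoom factors, and the exact rule for resizing the style image
# are assumptions for this sketch, not the schedule used in the paper.
import torch
import torch.nn.functional as F

def view_scale_schedule(num_stages: int = 3, base_zoom: float = 1.0):
    """Yield (stage, zoom) pairs: early stages see the whole scene at coarse
    scale, later stages zoom in so fine texture detail is supervised."""
    for stage in range(num_stages):
        yield stage, base_zoom * (2.0 ** stage)

def scale_style_to_view(style: torch.Tensor, zoom: float) -> torch.Tensor:
    """Resize the style reference so its feature statistics roughly match the
    spatial scale of the current training views. style: (1, 3, H, W)."""
    h, w = style.shape[-2:]
    return F.interpolate(style, size=(int(h * zoom), int(w * zoom)),
                         mode="bilinear", align_corners=False)

# Example: at each stage, zoomed-in views are paired with an up-scaled style image.
style_img = torch.rand(1, 3, 256, 256)
for stage, zoom in view_scale_schedule():
    style_at_scale = scale_style_to_view(style_img, zoom)
    print(stage, zoom, tuple(style_at_scale.shape[-2:]))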

Framework

StyleCity Framework
Framework overview of StyleCity. Taking a style image and text references as input, StyleCity optimizes a neural texture field in a semantics-aware fashion with progressively scaled training views. The synthesized omnidirectional sky enhances the style atmosphere and assists semantic style supervision.
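As a sketch of what semantics-aware style supervision can look like in practice, the example below matches Gram-matrix statistics of VGG features per semantic class (e.g., building, sky, road) rather than over the whole image. The mask format, the class set, and the helper names (gram, semantic_style_loss) are assumptions for this illustration and do not reproduce our exact loss.

# Minimal sketch of a semantics-aware Gram-matrix style loss (illustrative only).
# Assumes the rendered view and the style image come with per-class binary masks
# over a shared label set; normalization is kept deliberately simple.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

_vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()  # frozen feature extractor
for p in _vgg.parameters():
    p.requires_grad_(False)

def gram(feat: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrix of a (B, C, H, W) feature map."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def semantic_style_loss(render, style, render_masks, style_masks):
    """Match style statistics per semantic class instead of over the full image.
    render, style: (B, 3, H, W) images; *_masks: {class: (B, 1, H, W) mask}."""
    fr, fs = _vgg(render), _vgg(style)
    loss = render.new_zeros(())
    for cls in render_masks.keys() & style_masks.keys():
        mr = F.interpolate(render_masks[cls], size=fr.shape[-2:], mode="nearest")
        ms = F.interpolate(style_masks[cls], size=fs.shape[-2:], mode="nearest")
        if mr.sum() < 1 or ms.sum() < 1:
            continue  # class absent in one of the images
        # Restrict features to the class region, then compare Gram matrices.
        loss = loss + F.mse_loss(gram(fr * mr), gram(fs * ms))
    return loss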
Style-Aligned Omnidirectional Sky Synthesis
(a) High-resolution sky synthesis with omnidirectional sampling.
(b) Panoramic sky synthesis comparisons with SyncDiffusion using the same seeds and text prompts.
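For readers curious about the omnidirectional sampling step, the sketch below shows standard equirectangular-to-perspective sampling, the kind of operation a 360° sky synthesis pipeline relies on to move between the panorama and perspective views. The camera conventions, field of view, and resolutions here are illustrative assumptions, not the exact settings of our pipeline.

# Minimal sketch of sampling a perspective view from an equirectangular panorama.
# Conventions: z forward, x right, y down; panorama longitude spans [-pi, pi].
import math
import torch
import torch.nn.functional as F

def perspective_from_pano(pano, yaw, pitch, fov_deg=90.0, out_hw=(256, 256)):
    """pano: (1, 3, H, W) equirectangular image; returns a (1, 3, h, w) view."""
    h, w = out_hw
    f = 0.5 * w / math.tan(math.radians(fov_deg) / 2)  # pinhole focal length
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    dirs = torch.stack([(xs - w / 2) / f, (ys - h / 2) / f,
                        torch.ones_like(xs, dtype=torch.float32)], dim=-1)
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)            # unit ray directions
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    Ry = torch.tensor([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = torch.tensor([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    dirs = dirs @ (Ry @ Rx).T                                 # rotate rays into the panorama frame
    lon = torch.atan2(dirs[..., 0], dirs[..., 2])             # longitude in [-pi, pi]
    lat = torch.asin(dirs[..., 1].clamp(-1, 1))               # latitude in [-pi/2, pi/2]
    grid = torch.stack([lon / math.pi, lat / (math.pi / 2)], dim=-1)  # normalized to [-1, 1]
    return F.grid_sample(pano, grid.unsqueeze(0), mode="bilinear", align_corners=True)

# Example: sample four views around the horizon of a random panorama.
pano = torch.rand(1, 3, 512, 1024)
views = [perspective_from_pano(pano, yaw=k * math.pi / 2, pitch=0.0) for k in range(4)]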

Results

*Note: all presented videos are compressed and may not fully showcase the details achieved by our method. Please access our high-resolution videos or render your own to fully appreciate the quality of our results.
Photorealistic Time-of-Day Effects
Input Styles

Hallucination of magical times of day for a city.

Photorealism and Futurism styles
Artistic styles (text-guided editing)

Comparisons

Comparisons among Instruct-NeRF2NeRF, ARF, StyleMesh, and our proposed method. Our results show plausible textures and backgrounds while preserving the scene identity across different style transfer cases.

Comparisons with Google Earth.

Materials

Paper
Supplementary Doc
Poster
Extended Abstract (CV2Metaverse Workshop)
StyleCity Explained
High-res Results
Code (TBA)
Data Collector
Online Viewer

Citation


@inproceedings{chen2024stylecity,
	title={StyleCity: Large-Scale 3D Urban Scenes Stylization},
	author={Chen, Yingshu and Huang, Huajian and Vu, Tuan-Anh and Shum, Ka Chun and Yeung, Sai-Kit},
	booktitle={Proceedings of the European Conference on Computer Vision},
	year={2024}
}

Related Works

Advances in 3D Neural Stylization: A Survey. 2023.
Disentangling Structure and Appearance in ViT Feature Space. TOG 2023.
SyncDiffusion: Coherent Montage via Synchronized Joint Diffusions. NeurIPS 2023.
Time-of-Day Neural Style Transfer for Architectural Photographs. ICCP 2022.
Deep Photo Style Transfer. CVPR 2017.
Data-driven Hallucination for Different Times of Day from a Single Outdoor Photo. SIGGRAPH Asia / TOG 2013.

Acknowledgements

This work was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. HKUST 16202323) and an internal grant from HKUST (R9429).