Pixelpiece3 May 2026
This paper explores the transition from latent-space diffusion models to pixel-space diffusion generation. We address the "flying pixel" artifact, a common byproduct of Variational Autoencoder (VAE) compression, by performing diffusion directly in the pixel domain. By leveraging semantics-prompted diffusion, our approach ensures high-quality point cloud reconstruction from single-view images.

1. Introduction
Moving diffusion to the pixel space represents a significant leap in the fidelity of generated depth maps. This has direct implications for high-resolution 3D reconstruction and augmented reality applications where depth precision is paramount.
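To make the "flying pixel" problem concrete, the sketch below back-projects a depth map to a point cloud and masks pixels that straddle large depth discontinuities, where blurred or interpolated depth values produce points floating between surfaces. This is a minimal illustration only, not the paper's pipeline; the function name, intrinsics, and the relative-jump threshold are assumptions.

```python
# Hypothetical sketch: unproject a depth map to a point cloud and drop
# "flying pixels" at depth discontinuities. Names and thresholds are
# illustrative, not from the paper.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, edge_thresh=0.1):
    """Unproject an HxW metric depth map with pinhole intrinsics.

    Pixels whose depth differs from a right/bottom neighbour by more than
    `edge_thresh` (relative) are treated as flying pixels and masked out.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Relative depth jump to the right and bottom neighbours.
    jump_x = np.abs(np.diff(depth, axis=1, append=depth[:, -1:])) / np.maximum(depth, 1e-6)
    jump_y = np.abs(np.diff(depth, axis=0, append=depth[-1:, :])) / np.maximum(depth, 1e-6)
    valid = (depth > 0) & (jump_x < edge_thresh) & (jump_y < edge_thresh)

    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # (N, 3) points in the camera frame

if __name__ == "__main__":
    depth = np.full((240, 320), 2.0, dtype=np.float32)
    depth[100:140, 150:200] = 1.0  # a foreground object in front of a wall
    pts = depth_to_point_cloud(depth, fx=300.0, fy=300.0, cx=160.0, cy=120.0)
    print(pts.shape)
```

Filtering of this kind only removes bad points after the fact; generating depth directly in pixel space aims to avoid producing the interpolated boundary values in the first place.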
We evaluate our approach on the NYU Depth V2 and KITTI datasets and examine how high-level semantic cues guide the diffusion process to differentiate between overlapping object boundaries.
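As a rough illustration of how semantic cues could condition a pixel-space denoiser, the PyTorch sketch below concatenates a per-pixel semantic feature map with the noisy depth channel before predicting the noise residual. The module names, channel sizes, timestep embedding, and conditioning scheme are all assumptions for illustration; the paper's semantics-prompted diffusion may be organized differently.

```python
# Minimal sketch of semantics-conditioned noise prediction in pixel space.
# All names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn

class SemanticsPromptedDenoiser(nn.Module):
    def __init__(self, sem_channels=64, hidden=128):
        super().__init__()
        # 1 noisy-depth channel plus semantic feature channels as input.
        self.in_conv = nn.Conv2d(1 + sem_channels, hidden, 3, padding=1)
        self.t_embed = nn.Linear(1, hidden)  # simple scalar-timestep embedding
        self.body = nn.Sequential(
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),  # predicted noise residual
        )

    def forward(self, noisy_depth, sem_feats, t):
        # Concatenate semantic features with the noisy depth channel-wise.
        x = torch.cat([noisy_depth, sem_feats], dim=1)      # (B, 1+C, H, W)
        h = self.in_conv(x)
        temb = self.t_embed(t.float().view(-1, 1))           # (B, hidden)
        h = h + temb[:, :, None, None]                       # inject timestep
        return self.body(h)

if __name__ == "__main__":
    model = SemanticsPromptedDenoiser()
    depth = torch.randn(2, 1, 64, 64)    # noisy depth at step t
    sem = torch.randn(2, 64, 64, 64)     # per-pixel semantic features
    t = torch.randint(0, 1000, (2,))
    print(model(depth, sem, t).shape)    # torch.Size([2, 1, 64, 64])
```

The intent of such conditioning is that the semantic features carry object-boundary information, so the denoiser can keep depth predictions sharp where two objects overlap rather than averaging across them.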