GTAvatar: Bridging Gaussian Splatting and Texture Mapping for Relightable and Editable Gaussian Avatars

Jian Li Wei Wang Chen Zhang
Department of Computer Science, University of Technology, City, Country

Abstract

This paper introduces GTAvatar, a framework that integrates Gaussian Splatting with traditional texture mapping to create realistic, relightable, and editable human avatars. We address two limitations of existing avatar representations: limited flexibility for editing and insufficient real-time rendering performance. Our method improves both visual quality and interactive control over avatar appearance and lighting, enabling applications in virtual reality and digital content creation.

Keywords

Gaussian Splatting, Texture Mapping, Relightable Avatars, Editable Avatars, Neural Rendering


1. Introduction

The creation of realistic and controllable 3D human avatars is a long-standing challenge in computer graphics, with increasing demand in virtual reality, gaming, and telepresence. Current neural rendering techniques often struggle with fine-grained editability and relightability, which hinders their practical application. This work proposes GTAvatar to overcome these limitations by combining the strengths of explicit geometry, in the form of classical texture mapping, with the advanced neural representation of Gaussian Splatting.

2. Related Work

Recent advances in neural radiance fields and other implicit representations have shown impressive results in novel view synthesis, yet they often lack intuitive controls for editing and relighting. Traditional 3D scanning and texture mapping offer high fidelity but are difficult to animate and relight dynamically. Gaussian Splatting provides excellent real-time rendering, but its unstructured set of primitives complicates targeted edits and decomposition into material properties. Our approach draws inspiration from these areas while aiming to resolve their respective shortcomings through a hybrid model.

3. Methodology

GTAvatar bridges Gaussian Splatting and texture mapping by explicitly representing geometry with a textured mesh and augmenting it with view-dependent appearance captured by Gaussian splats. First, a base mesh is extracted and parameterized to serve as a texture atlas. Second, Gaussian splats are strategically positioned and optimized to capture residual details and dynamic lighting effects, ensuring consistency with the underlying texture. This architecture allows for independent manipulation of texture details and environmental lighting, enhancing both realism and control. The integration enables high-quality rendering while preserving the ability to edit material properties and illumination parameters.
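To make the decomposition concrete, the sketch below illustrates one plausible way the two layers could be composited per pixel, assuming the textured mesh and the Gaussian splats have each already been rasterized into screen-space buffers. The function name, buffer layout, and alpha-blending scheme are illustrative assumptions on our part, not the published GTAvatar interface. Conceptually, the output is C = (1 - alpha) * albedo * shading + alpha * residual, so relighting touches only the shading term while texture edits touch only the albedo term.

    import torch

    def composite_hybrid(albedo: torch.Tensor,       # (H, W, 3) color sampled from the texture atlas
                         shading: torch.Tensor,      # (H, W, 3) per-pixel environment shading
                         splat_rgb: torch.Tensor,    # (H, W, 3) rasterized Gaussian-splat residual
                         splat_alpha: torch.Tensor,  # (H, W, 1) accumulated splat opacity
                         ) -> torch.Tensor:
        # The texture layer is relit by modulating albedo with the shading
        # term; editing the texture or the lighting affects only this layer.
        base = albedo * shading
        # Splats contribute residual, view-dependent detail on top.
        return base * (1.0 - splat_alpha) + splat_rgb * splat_alpha

    # Toy usage with random buffers standing in for real rasterizer outputs.
    H, W = 256, 256
    image = composite_hybrid(torch.rand(H, W, 3), torch.rand(H, W, 3),
                             torch.rand(H, W, 3), torch.rand(H, W, 1))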

4. Experimental Results

Our experimental evaluation demonstrates that GTAvatar achieves superior visual quality and broader editability than state-of-the-art methods. Quantitative metrics, namely PSNR, SSIM, and LPIPS, confirm the fidelity of our reconstructions across varied poses and lighting conditions. We further show that the system can relight avatars convincingly and modify appearance details directly, validating the effectiveness of our hybrid representation and its competitiveness in both rendering quality and efficiency.
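For reference, PSNR, SSIM, and LPIPS can be computed per image with standard library implementations. The sketch below is a generic evaluation helper, not the authors' evaluation code; it assumes scikit-image and the lpips package (with the common AlexNet backbone), both widely used for these metrics.

    import numpy as np
    import torch
    import lpips  # pip install lpips
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    lpips_fn = lpips.LPIPS(net="alex")  # common default backbone

    def evaluate(pred: np.ndarray, gt: np.ndarray) -> dict:
        # pred, gt: (H, W, 3) float arrays with values in [0, 1].
        psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
        ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
        # LPIPS expects NCHW tensors scaled to [-1, 1].
        to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() * 2 - 1
        with torch.no_grad():
            lp = lpips_fn(to_t(pred), to_t(gt)).item()
        return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lp}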

5. Discussion

GTAvatar successfully addresses the trade-off between photorealism and control in avatar generation by harmonizing Gaussian Splatting and texture mapping. The introduced framework provides a robust solution for creating highly detailed, relightable, and editable avatars, opening new avenues for interactive content creation. Future work could explore incorporating dynamic expressions and clothing deformations more seamlessly within the GTAvatar framework. We also plan to investigate real-time inverse rendering capabilities for even more advanced material recovery.