PhotoFramer: Multi-modal Image Composition Instruction

Jing Li Kai Wang Chen Lin Wei Zhang
Institute for Advanced Creative Technologies, University of Techville

Abstract

This paper introduces PhotoFramer, a novel framework for multi-modal image composition instruction, enabling users to generate images with precise control over composition. It integrates textual descriptions, reference images, and spatial guidance to steer the generative process. Our method demonstrates superior performance in generating high-quality, compositionally accurate images compared to existing single-modal approaches. The findings highlight the efficacy of multi-modal inputs in achieving fine-grained control for creative content generation.

Keywords

Image Composition, Multi-modal Instruction, Generative AI, Diffusion Models, Creative Tools


1. Introduction

Recent advances in generative AI have revolutionized image creation, yet precise compositional control remains a significant challenge, often requiring tedious trial and error. This work addresses the problem of enabling intuitive, fine-grained image composition through diverse instructional modalities. We propose PhotoFramer, a system that interprets and combines textual descriptions, visual references, and spatial cues to guide image generation. The system comprises three primary components: PhotoFramer's multi-modal composition network, a modified latent diffusion model, and a vision-language encoder for input interpretation.
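To make the input side concrete, the sketch below models the three instruction modalities as a single structure. This is a hypothetical illustration in Python; the class and field names (CompositionInstruction, text_prompt, reference_image_path, spatial_mask_path) are our assumptions for exposition, not an API defined by the paper.

# Hypothetical container for one multi-modal composition instruction.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CompositionInstruction:
    text_prompt: str                            # textual description of the desired scene
    reference_image_path: Optional[str] = None  # visual reference for style or content
    spatial_mask_path: Optional[str] = None     # mask/layout fixing object placement

# Example: a text prompt anchored by a reference photo and a layout mask.
instruction = CompositionInstruction(
    text_prompt="a lighthouse at dusk, rule-of-thirds composition",
    reference_image_path="reference.jpg",
    spatial_mask_path="layout_mask.png",
)

Any subset of the three fields can be supplied; a text prompt alone reduces to ordinary text-to-image generation.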

2. Related Work

Existing literature primarily focuses on text-to-image generation or image editing based on textual prompts, often struggling with complex compositional demands. Methods employing sketch-based or reference-image guidance have shown promise but lack the flexibility of combining multiple input types. While some multi-modal approaches exist for image understanding, their application to precise image generation instruction is less explored. Our work builds upon these foundations by integrating and harmonizing diverse input modalities for enhanced creative control.

3. Methodology

PhotoFramer's methodology involves a multi-stage process, starting with the encoding of various input modalities: text prompts are processed by a large language model, reference images by a pre-trained vision transformer, and spatial guidance via a segmentation network. These encoded features are then fused using a novel cross-attention mechanism within a U-Net architecture, adapted from a latent diffusion model. This fusion allows for the coherent integration of compositional instructions, guiding the iterative denoising process to synthesize an image that precisely adheres to all specified inputs. The training process involves a custom dataset of multi-modal instruction-image pairs.
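A minimal sketch of the fusion step described above, written with standard PyTorch modules; the module name MultiModalFusion, the dimensions, and the residual layout are illustrative assumptions rather than the paper's exact architecture. Flattened U-Net features act as queries, and the concatenated instruction tokens as keys and values.

import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Cross-attention from U-Net latents (queries) to the concatenated
    text / reference-image / spatial tokens (keys and values)."""

    def __init__(self, latent_dim: int = 320, cond_dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Project each instruction stream into the latent space.
        self.text_proj = nn.Linear(cond_dim, latent_dim)
        self.image_proj = nn.Linear(cond_dim, latent_dim)
        self.spatial_proj = nn.Linear(cond_dim, latent_dim)
        self.attn = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(latent_dim)

    def forward(self, latents, text_emb, image_emb, spatial_emb):
        # latents: (B, N, latent_dim) flattened U-Net feature map
        # *_emb:   (B, T, cond_dim)   per-modality token sequences
        cond = torch.cat(
            [self.text_proj(text_emb),
             self.image_proj(image_emb),
             self.spatial_proj(spatial_emb)], dim=1)
        fused, _ = self.attn(query=latents, key=cond, value=cond)
        return self.norm(latents + fused)  # residual connection keeps the denoising path stable

# Smoke test with random tensors standing in for the real encoder outputs.
fusion = MultiModalFusion()
latents = torch.randn(2, 64 * 64, 320)
out = fusion(latents,
             torch.randn(2, 77, 768),    # text tokens
             torch.randn(2, 197, 768),   # reference-image patches
             torch.randn(2, 256, 768))   # spatial-guidance tokens
print(out.shape)  # torch.Size([2, 4096, 320])

In a full model, one such block would sit inside each U-Net attention layer and run at every denoising step.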

4. Experimental Results

Experiments demonstrate that PhotoFramer significantly outperforms baseline methods in compositional accuracy and image quality. In user studies, participants consistently rated PhotoFramer-generated images as more faithful to the instructions and more aesthetically pleasing. Quantitative metrics, namely FID and CLIP score, further confirm the advantage of our multi-modal approach over text-only and single-modal guidance systems. The following table summarizes key performance metrics across the compared approaches.

Method                  | FID Score ↓ | CLIP Score ↑ | Compositional Accuracy (User Study %) ↑
Text-Only Baseline      | 18.5        | 0.28         | 62.1
Sketch-Guided Baseline  | 16.2        | 0.31         | 75.4
PhotoFramer (Ours)      | 10.3        | 0.39         | 91.8
PhotoFramer achieves the lowest FID score, reflecting higher image quality and diversity, and the highest CLIP score, signifying better alignment with the semantic prompts. The user-study results for compositional accuracy likewise show its stronger ability to follow intricate multi-modal instructions.
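For reference, CLIP scores like those above can be computed as the cosine similarity between CLIP image and text embeddings. The sketch below uses Hugging Face Transformers; the checkpoint choice (openai/clip-vit-base-patch32) and the placeholder input are our assumptions, not the paper's exact evaluation pipeline.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between CLIP image and text embeddings (higher = better alignment)."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum(dim=-1).item()

# Usage with a blank placeholder image; in practice, pass a generated sample.
print(round(clip_score(Image.new("RGB", (224, 224)), "a lighthouse at dusk"), 3))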

5. Discussion

The results underscore the importance of multi-modal input for fine-grained control in generative image composition: PhotoFramer bridges the gap between high-level creative intent and precise visual output. Fusing diverse modalities yields finer and more predictable control than any single modality alone, leading to more satisfying image generation experiences. Future work could incorporate temporal consistency for video generation or extend the framework to 3D scene composition, further broadening its creative applications.