This study proposes a method for more intuitive and efficient in-game character facial customization using Generative AI. Conventional avatar customization in MMORPGs is generally limited to fixed option sets, making it difficult to accurately recreate a desired appearance and requiring considerable working time. To address these issues, we employ Stable Diffusion to generate character facial images from text prompts, then extract landmark data with a facial recognition algorithm and map it to the game's customization variables. We confirmed that primary facial features, such as the eyes, nose, and mouth, could be transferred to the character without additional 3D modeling, enabling faster and more creative customization than traditional manual adjustment. Moreover, by mapping the generated images onto a character's appearance, we preserved the game's narrative framework and polygon structure while maintaining a high degree of similarity in key facial elements such as eye shape, lips, and jawline. In future work, we intend to fine-tune the model on game-specific datasets and extend the customization scope to clothing, items, and environments. This study not only demonstrates the potential of automated avatar customization through Generative AI but also serves as a foundational reference for novel approaches in subsequent game development.
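The central step of the pipeline, converting extracted facial landmarks into the game's customization variables, can be sketched as follows. This is an illustrative assumption of how such a mapping might work, not the paper's actual implementation: the landmark names, the measured ratios, and the slider bounds are all hypothetical, and a real system would use the coordinates produced by a detector such as dlib or MediaPipe.

```python
# Hypothetical sketch: mapping 2D facial landmarks onto normalized
# customization sliders. Landmark names, feature ratios, and bounds
# are illustrative assumptions, not the paper's actual variables.

def clamp(v, lo=0.0, hi=1.0):
    """Keep a value inside the slider range."""
    return max(lo, min(hi, v))

def to_slider(value, lo, hi):
    """Linearly map a measured ratio into the [0, 1] slider range."""
    return clamp((value - lo) / (hi - lo))

def landmarks_to_sliders(pts):
    """pts: dict of named (x, y) landmark coordinates in pixels.

    Ratios against the face width make the mapping invariant to
    image scale; the numeric bounds are assumed plausible ranges.
    """
    face_w = pts["jaw_right"][0] - pts["jaw_left"][0]
    eye_w = pts["eye_outer_r"][0] - pts["eye_inner_r"][0]
    mouth_w = pts["mouth_right"][0] - pts["mouth_left"][0]
    return {
        "eye_width": to_slider(eye_w / face_w, 0.10, 0.25),
        "mouth_width": to_slider(mouth_w / face_w, 0.25, 0.55),
    }

# Example with hand-made landmark coordinates.
example = {
    "jaw_left": (100, 300), "jaw_right": (300, 300),
    "eye_inner_r": (210, 180), "eye_outer_r": (245, 180),
    "mouth_left": (155, 260), "mouth_right": (245, 260),
}
print(landmarks_to_sliders(example))
```

In this sketch each customization variable is a ratio of two landmark distances rescaled into [0, 1], so the same code applies regardless of the generated image's resolution; adding a new feature (nose width, jawline angle, and so on) only requires another ratio and an assumed bound pair.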