2023.09 · KCI-indexed
With the recent advent of the Metaverse, the character industry, which reflects the characteristics of users' faces, is drawing attention. However, existing tools require users to select face components such as eyes, nose, and mouth one by one, which is cumbersome. In this paper, we propose a diffusion-based model that automatically generates characters from content photographs of human faces. Our model generates artistic characters for users by reflecting content information such as the face angle, direction, and shape in the content photo. In particular, it automatically analyzes detailed attributes such as glasses and whiskers in the content photo and reflects them in the generated artistic characters. Our network produces the final character through three steps: a diffusion process, a UNet, and a denoising process. We use image encoders and CLIP encoders to connect the style and the input data. In the diffusion process, noise vectors are gradually added to a style vector so that detailed styles can be learned without loss. All inputs except the style images are vectorized with CLIP encoders and then learned together with the noisy style vectors in the UNet. The UNet then removes the noise from the vectors to obtain the artistic character image. We demonstrate the performance of our method by comparing its results with those of other models; it reflects content information without loss and generates natural, high-definition characters.
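
The abstract describes the pipeline only at a high level. Purely as an illustration of the general technique, and not the authors' implementation, the PyTorch sketch below mimics one training step of a conditional diffusion model: a small placeholder encoder stands in for the paper's CLIP/image encoders, a small MLP stands in for the UNet, noise is gradually added to a style vector under a standard forward-diffusion schedule, and the denoiser learns to predict that noise conditioned on the content embedding. All module names (ToyEncoder, ToyDenoiser) and hyperparameters are assumptions, not taken from the paper.

    import torch
    import torch.nn as nn

    class ToyEncoder(nn.Module):
        """Hypothetical stand-in for the CLIP/image encoders in the paper."""
        def __init__(self, dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, dim))

        def forward(self, x):
            return self.net(x)

    class ToyDenoiser(nn.Module):
        """Hypothetical stand-in for the UNet: predicts the added noise."""
        def __init__(self, dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim * 2 + 1, 128), nn.ReLU(),
                nn.Linear(128, dim))

        def forward(self, noisy_style, cond, t):
            # t is a normalized timestep, broadcast across the batch.
            t_feat = t.expand(noisy_style.size(0), 1)
            return self.net(torch.cat([noisy_style, cond, t_feat], dim=-1))

    T = 100                                  # number of diffusion steps (assumed)
    betas = torch.linspace(1e-4, 0.02, T)    # linear noise schedule (assumed)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)

    def q_sample(style_vec, t, noise):
        """Forward diffusion: gradually add noise to the style vector."""
        a = alpha_bar[t].sqrt().unsqueeze(-1)
        s = (1.0 - alpha_bar[t]).sqrt().unsqueeze(-1)
        return a * style_vec + s * noise

    encoder = ToyEncoder()
    denoiser = ToyDenoiser()

    content_img = torch.randn(1, 3, 64, 64)  # content photo (random tensor here)
    style_img = torch.randn(1, 3, 64, 64)    # style reference (random tensor here)

    cond = encoder(content_img)              # CLIP-like content embedding
    style_vec = encoder(style_img)           # style embedding to be noised

    # One training step: predict the noise added at a random timestep.
    t = torch.randint(0, T, (1,))
    noise = torch.randn_like(style_vec)
    noisy = q_sample(style_vec, t, noise)
    pred = denoiser(noisy, cond, t.float() / T)
    loss = nn.functional.mse_loss(pred, noise)
    loss.backward()
    print(f"denoising loss: {loss.item():.4f}")

At generation time, the same denoiser would be applied iteratively to remove the noise from a noisy style vector, conditioned on the content embedding, which corresponds to the denoising step the abstract describes.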