Search Results

4 results

        1.
2025.03 | KCI-indexed | Free for subscribing institutions; paid for individual members
        This study presents a novel methodology for analyzing disease relationships from a network perspective using Large Language Model (LLM) embeddings. We constructed a disease network based on 4,489 diseases from the International Classification of Diseases (ICD-11) using OpenAI’s text-embedding-3-small model. Network analysis revealed that diseases exhibit small-world characteristics with a high clustering coefficient (0.435) and form 16 major communities. Notably, mental health-related diseases showed high centrality in the network, and a clear inverse relationship was observed between community size and internal density. The embedding-based relationship analysis revealed meaningful patterns of disease relationships, suggesting the potential of this methodology as a novel tool for studying disease associations. Results suggest that mental health conditions play a more central role in disease relationships than previously recognized, and disease communities show distinct organizational patterns. This approach shows promise as a valuable tool for exploring large-scale disease relationships and generating new research hypotheses.
KRW 4,000
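As a rough illustration of the pipeline this abstract describes, the sketch below embeds a handful of disease names with text-embedding-3-small, links pairs above a cosine-similarity threshold, and computes the same kinds of network statistics (clustering coefficient, communities). The sample diseases, the 0.5 threshold, and all identifiers are illustrative assumptions; the listing does not give the paper's actual edge criterion or code.

```python
# Sketch: build a disease similarity network from LLM embeddings.
# Disease names, the similarity threshold, and all identifiers are
# illustrative assumptions, not the study's actual pipeline.
from itertools import combinations

import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

diseases = [
    "Essential hypertension",
    "Type 2 diabetes mellitus",
    "Major depressive disorder",
    "Generalised anxiety disorder",
]

# Embed each disease name with the model the abstract names.
resp = client.embeddings.create(model="text-embedding-3-small", input=diseases)
vectors = np.array([item.embedding for item in resp.data])

# OpenAI embeddings are unit-length, so a dot product is cosine similarity.
# Connect disease pairs whose similarity clears an (assumed) threshold.
THRESHOLD = 0.5
G = nx.Graph()
G.add_nodes_from(diseases)
for (i, a), (j, b) in combinations(enumerate(diseases), 2):
    sim = float(vectors[i] @ vectors[j])
    if sim >= THRESHOLD:
        G.add_edge(a, b, weight=sim)

# Network statistics of the kind reported in the abstract.
print("clustering coefficient:", nx.average_clustering(G))
if G.number_of_edges():
    for k, com in enumerate(greedy_modularity_communities(G)):
        print(f"community {k}:", sorted(com))
```

The same calls scale to the full 4,489-disease ICD-11 set the abstract describes; only the embedding batch and the edge-construction loop grow.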
        2.
2025.03 | KCI-indexed | Free for subscribing institutions; paid for individual members
The rapid development of generative artificial intelligence is having a broad impact across society and is being applied in many areas, including everyday life. This study reviews trends in AI technology with a focus on Large Language Models (LLMs) and finds that generative-AI-based solutions are contributing to efficiency and service optimization in the political and public sectors. Drawing on case analyses from the United States, Singapore, and India, the study argues that AI tools can improve the scalability of election operations and interaction with citizens. At the same time, it notes the need to examine issues that arise in the practical use of large language models, including bias, the spread of misinformation, and regulatory gaps. In short, generative AI offers potential for advancing democracy and improving public services, but its sustainable and appropriate use requires attention to transparency, fairness, and accountability.
KRW 7,700
        3.
2024.06 | KCI-indexed | Free for subscribing institutions; paid for individual members
This study introduces and experimentally validates a novel approach that combines Instruction fine-tuning and Low-Rank Adaptation (LoRA) fine-tuning to optimize the performance of Large Language Models (LLMs). These models have become revolutionary tools in natural language processing, showing remarkable performance across diverse application areas. However, optimizing their performance for specific domains necessitates fine-tuning of the foundation models (FMs), which is often limited by challenges such as data complexity and resource costs. The proposed approach aims to overcome these limitations by enhancing the performance of LLMs, particularly in the precision and efficiency of analyzing national Research and Development (R&D) data. The study provides theoretical foundations and technical implementations of Instruction fine-tuning and LoRA fine-tuning. Through rigorous experimental validation, it is demonstrated that the proposed method significantly improves the precision and efficiency of data analysis, outperforming traditional fine-tuning methods. This enhancement is not only beneficial for national R&D data but also suggests potential applicability in various other data-centric domains, such as medical data analysis, financial forecasting, and educational assessments. The findings highlight the method's broad utility and significant contribution to advancing data analysis techniques in specialized knowledge domains, offering new possibilities for leveraging LLMs in complex and resource-intensive tasks. This research underscores the transformative potential of combining Instruction fine-tuning with LoRA fine-tuning to achieve superior performance in diverse applications, paving the way for more efficient and effective utilization of LLMs in both academic and industrial settings.
KRW 4,500
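As a rough sketch of how Instruction fine-tuning and LoRA can be combined in practice, the snippet below freezes a small open base model, injects low-rank adapters into its attention projections, and trains on instruction-response pairs. The base model (facebook/opt-125m), the LoRA hyperparameters, the prompt template, and the toy record are assumptions made here for illustration, not the configuration used in the study.

```python
# Sketch: combine instruction-style supervised fine-tuning with LoRA.
# The base model, hyperparameters, prompt template, and toy record are
# illustrative assumptions; the study's actual setup is not in this listing.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "facebook/opt-125m"  # small open placeholder base model
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA: freeze the base weights and train only low-rank update matrices
# injected into the attention projections.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
               target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Instruction fine-tuning: serialize instruction-response pairs into
# training strings the causal LM learns to complete.
records = [{
    "instruction": "Summarize this national R&D project description.",
    "response": "The project develops ...",
}]

def tokenize(rec):
    text = (f"### Instruction:\n{rec['instruction']}\n"
            f"### Response:\n{rec['response']}")
    return tokenizer(text, truncation=True, max_length=512)

ds = Dataset.from_list(records).map(
    tokenize, remove_columns=["instruction", "response"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-rnd", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-rnd")  # writes only the small adapter weights
```

Because only the low-rank adapter matrices are trained, the saved checkpoint is a small fraction of the full model, which is the kind of resource saving over conventional full fine-tuning that the abstract points to.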