
Instruction Fine-tuning and LoRA Combined Approach for Optimizing Large Language Models (KCI Indexed)

Korean title: 대규모 언어 모델의 최적화를 위한 지시형 미세 조정과 LoRA 결합 접근법

  • Language: KOR
  • URL: https://db.koreascholar.com/Article/Detail/435325
한국산업경영시스템학회지 (Journal of Society of Korea Industrial and Systems Engineering)
한국산업경영시스템학회 (Society of Korea Industrial and Systems Engineering)
Abstract

This study introduces and experimentally validates a novel approach that combines Instruction fine-tuning and Low-Rank Adaptation (LoRA) fine-tuning to optimize the performance of Large Language Models (LLMs). These models have become revolutionary tools in natural language processing, showing remarkable performance across diverse application areas. However, optimizing their performance for specific domains necessitates fine-tuning of the foundation models (FMs), which is often limited by challenges such as data complexity and resource costs. The proposed approach aims to overcome these limitations by enhancing the performance of LLMs, particularly the precision and efficiency of analyzing national Research and Development (R&D) data. The study provides the theoretical foundations and technical implementations of Instruction fine-tuning and LoRA fine-tuning. Through rigorous experimental validation, it is demonstrated that the proposed method significantly improves the precision and efficiency of data analysis, outperforming traditional fine-tuning methods. This enhancement is not only beneficial for national R&D data but also suggests potential applicability in other data-centric domains, such as medical data analysis, financial forecasting, and educational assessment. The findings highlight the method's broad utility and its significant contribution to advancing data analysis techniques in specialized knowledge domains, offering new possibilities for leveraging LLMs in complex and resource-intensive tasks. This research underscores the transformative potential of combining Instruction fine-tuning with LoRA fine-tuning to achieve superior performance in diverse applications, paving the way for more efficient and effective utilization of LLMs in both academic and industrial settings.
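A minimal sketch of the combined approach described in the abstract, i.e., applying LoRA as a parameter-efficient adapter while training on instruction-formatted examples, is given below. The paper does not disclose its exact configuration; the base model name, target modules, rank, and prompt template here are assumptions chosen for illustration, using the Hugging Face transformers and peft libraries.

# A minimal, hypothetical sketch: instruction-style fine-tuning with LoRA (PEFT).
# Assumptions: Hugging Face transformers/peft are installed; the base FM name,
# rank, and prompt template are illustrative, not the paper's actual settings.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"   # placeholder foundation model (FM)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA freezes the pretrained weight W and learns a low-rank update B @ A,
# so the effective weight becomes W + (lora_alpha / r) * B @ A.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (model-dependent)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the LoRA matrices are trainable

# Instruction-formatted training example (illustrative template)
def format_example(instruction: str, response: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"

sample = format_example(
    "Summarize the objective of the following national R&D project description.",
    "The project aims to ...",
)
inputs = tokenizer(sample, return_tensors="pt")

In this setup only the rank-r adapter matrices are updated during instruction fine-tuning, which is what keeps the resource cost low relative to full fine-tuning of the FM.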

Table of Contents
1. Introduction
2. Related Work
    2.1 Practical Applications of Large Language Models (LLMs)
    2.2 Future Directions for LLMs
    2.3 FM-Based Fine-tuning
3. Research Methodology
    3.1 Methodology Design
    3.2 Data Construction and Preprocessing
    3.3 FM Fine-tuning Procedure
    3.4 LoRA-Based PEFT Procedure
4. Key Experimental Results
    4.1 Evaluation Methods for Generative AI Models
    4.2 Performance Evaluation of the Fine-tuned FM
    4.3 Key Findings
5. Conclusion
Acknowledgement
References
Authors
  • Sang-Gook Kim(Korea Institute of Science and Technology Information) | 김상국 (한국과학기술정보연구원) Corresponding author
  • Kyungran Noh(Korea Institute of Science and Technology Information) | 노경란 (한국과학기술정보연구원)
  • Hyuk Hahn(Korea Institute of Science and Technology Information) | 한혁 (한국과학기술정보연구원)
  • Boong Kee Choi(Korea Institute of Science and Technology Information) | 최붕기 (한국과학기술정보연구원)