
Implementation of a Game Agent Using a Deep Q-Network (KCI-indexed)

Deep Q-Network based Game Agents

  • Language: KOR
  • URL: https://db.koreascholar.com/Article/Detail/378888
로봇학회논문지 (The Journal of Korea Robotics Society)
한국로봇학회 (Korea Robotics Society)
Abstract

The video game Tetris is one of the most popular games, and it is well known that its rules can be modeled as an MDP (Markov Decision Process). This paper presents a DQN (Deep Q-Network) based game agent for Tetris. To this end, the state is defined as the captured image of the Tetris game board, and the reward is designed as a function of the number of lines cleared by the game agent. The actions are defined as left, right, rotate, drop, and a finite number of their combinations. In addition, PER (Prioritized Experience Replay) is employed to enhance learning performance. More than 500,000 episodes are used to train the network, and the game agent uses the trained network to make decisions. The performance of the developed algorithm is validated not only in simulation but also on a real Tetris robot agent built from a camera, two Arduinos, four servo motors, and 3D-printed artificial fingers.
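The abstract does not include code, so the following is a minimal sketch, assuming PyTorch, of how the components it describes could fit together: a Q-network over the captured board image, the four basic actions, a reward that depends on cleared lines, and proportional prioritized experience replay. Every name, layer size, and hyperparameter here (QNetwork, PERBuffer, reward_fn, the quadratic line bonus, alpha/beta, etc.) is an illustrative assumption, not the authors' exact design.

```python
# Sketch of a DQN-with-PER training step for a Tetris agent (assumed design).
import numpy as np
import torch
import torch.nn as nn

ACTIONS = ["left", "right", "rotate", "drop"]  # per the abstract; combinations omitted

class QNetwork(nn.Module):
    """Maps a captured board image (1 channel) to one Q-value per action."""
    def __init__(self, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),  # infers its input size on first forward
            nn.Linear(128, n_actions),
        )
    def forward(self, x):
        return self.net(x)

def reward_fn(lines_cleared: int) -> float:
    # The abstract only says the reward is a function of cleared lines;
    # a quadratic bonus for multi-line clears is a common (assumed) choice.
    return float(lines_cleared ** 2)

class PERBuffer:
    """Proportional prioritized experience replay (Schaul et al., 2016)."""
    def __init__(self, capacity=100_000, alpha=0.6, eps=1e-5):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.prios, self.pos = [], [], 0
    def push(self, transition):
        # New transitions get the current max priority so they are seen at least once.
        max_p = max(self.prios, default=1.0)
        if len(self.data) < self.capacity:
            self.data.append(transition); self.prios.append(max_p)
        else:
            self.data[self.pos] = transition; self.prios[self.pos] = max_p
            self.pos = (self.pos + 1) % self.capacity
    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.prios) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        w = (len(self.data) * p[idx]) ** (-beta)   # importance-sampling weights
        w /= w.max()
        return idx, [self.data[i] for i in idx], torch.tensor(w, dtype=torch.float32)
    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.prios[i] = abs(float(e)) + self.eps

def train_step(q, q_target, buffer, optimizer, batch_size=32, gamma=0.99):
    idx, batch, w = buffer.sample(batch_size)
    s, a, r, s2, done = map(torch.stack, zip(*batch))
    q_sa = q(s).gather(1, a.view(-1, 1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1 - done.float()) * q_target(s2).max(1).values
    td = target - q_sa
    loss = (w * td.pow(2)).mean()        # squared TD error weighted by IS weights
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    buffer.update(idx, td.detach())      # refresh priorities with the new TD errors
    return loss.item()
```

Because the first linear layer is lazy, the network needs one dummy forward pass, e.g. q(torch.zeros(1, 1, 84, 84)) with an assumed 84x84 capture size, before an optimizer over q.parameters() is constructed; the target network q_target would be a periodically synced copy, as is standard for DQN.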

Table of Contents
Abstract
1. Introduction
2. Deep Reinforcement Learning
    2.1 Markov Decision Processes and Reinforcement Learning
    2.2 Q-learning
    2.3 Deep Q-Network
    2.4 Prioritized Experience Replay
3. Tetris as a Markov Decision Process
    3.1 State
    3.2 Action
    3.3 Reward
4. Training Method
    4.1 Neural Network Architecture
    4.2 Training Algorithm
5. Hardware Design
    5.1 Image Recognition and Processing
6. Performance Evaluation
7. Conclusion
References
Authors
  • Dongki Han (한동기), Dept. of EIE, SeoulTech
  • Myeongseop Kim (김명섭), Dept. of EIE, SeoulTech
  • Jaeyoun Kim (김재윤), Dept. of EIE, SeoulTech
  • Jung-Su Kim (김정수), Dept. of EIE, SeoulTech (corresponding author)