Search results: 9

1.
2024.12 · KCI-indexed · Free for subscribing institutions, paid for individual members
In the manufacturing industry, environmental conditions and constraints change rapidly. A dispatching system that allocates work to resources at the right time therefore plays an important role in improving production speed and quality. Rule-based static dispatching methods have traditionally been widely used, but applying such a static approach to a dynamic, uncertain production environment leads to several problems, including decreased productivity, delayed deliveries, and lower operating rates, so a dynamic dispatching method is needed. This study develops a reinforcement learning-based dynamic dispatching system in which dispatching agents learn optimal dispatching rules for given environmental states. The state space encodes information such as WIP (work-in-process) and inventory levels, order status, machine status, and process status. A dispatching agent selects the dispatching rule that best serves two objectives simultaneously: minimizing total tardiness and minimizing the number of setups. In particular, the study targets a multi-area manufacturing system consisting of a flow-shop area and a cellular-shop area: in addition to the dispatching agent that manages inputs to the flow-shop, a second agent manages transfers from the flow-shop to the cellular-shop, and the two agents interact closely. An agent-based dispatching system is implemented, and its performance is verified by comparing it with the existing static dispatching method.
4,000 KRW
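The core idea of the abstract, an agent that learns which dispatching rule to apply in a given shop state, can be sketched with tabular Q-learning. This is a minimal illustration, not the paper's implementation: the rule set, the discretized state, and the reward are all assumptions for the sketch.

```python
import random
from collections import defaultdict

# Candidate dispatching rules the agent chooses among (illustrative set).
RULES = ["SPT", "EDD", "MIN_SETUP"]

class RuleSelectionAgent:
    """Tabular Q-learning agent that picks a dispatching rule per state."""

    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)   # Q[(state, rule)] -> estimated value
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def select_rule(self, state):
        # epsilon-greedy choice over the dispatching rules
        if random.random() < self.epsilon:
            return random.choice(RULES)
        return max(RULES, key=lambda r: self.q[(state, r)])

    def update(self, state, rule, reward, next_state):
        # one-step Q-learning update toward the TD target
        best_next = max(self.q[(next_state, r)] for r in RULES)
        td_target = reward + self.gamma * best_next
        self.q[(state, rule)] += self.alpha * (td_target - self.q[(state, rule)])

agent = RuleSelectionAgent()
state = ("high_wip", "many_setups")   # coarse, discretized shop status
rule = agent.select_rule(state)
# a reward might combine the paper's two objectives, e.g.
# negative tardiness minus a setup penalty
agent.update(state, rule, reward=-3.0, next_state=("low_wip", "few_setups"))
```

In the paper's setting there would be two such agents (flow-shop input and flow-shop-to-cellular-shop transfer) sharing the environment.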
4.
2023.06 · KCI-indexed · Free for subscribing institutions, paid for individual members
Reinforcement learning (RL) is widely applied across engineering fields. In particular, RL has performed well on control problems such as vehicles, robotics, and active structural control systems. However, little research on applying RL to optimal structural design has been conducted to date. This study investigates the applicability of RL to the structural design of reinforced concrete (RC) beams. The RC beam design problem introduced in a previous study was used for a comparative study. Deep Q-network (DQN), a well-known RL algorithm with good performance in discrete action spaces, was adopted. The DQN agent's action must represent the design variables of the RC beam, but there are too many design variables to encode in the action of a single conventional DQN. To solve this problem, a multi-agent DQN was used. For a more effective learning process, Double DQN (DDQN), an improved version of the conventional DQN based on double Q-learning, was employed. The multi-agent DDQN was trained, without any hand-labeled dataset, to produce optimal RC beam designs satisfying ACI 318 (American Concrete Institute). Five DDQN agents provide the actions for beam width, beam depth, main rebar size, number of main rebars, and shear stirrup size, respectively. The agents were trained for 10,000 episodes, and their performance was evaluated on 100 test design cases. The results show that the multi-agent DDQN algorithm can successfully produce structural design results for RC beams.
4,000 KRW
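The double Q-learning mechanism behind DDQN can be shown in a small tabular sketch: one table selects the best next action, the other evaluates it, which reduces the over-estimation bias of plain Q-learning. The paper uses deep networks and five agents; the two agents and the candidate beam dimensions below are illustrative assumptions, not the paper's design spaces.

```python
from collections import defaultdict

# One agent per design variable, as in the paper's multi-agent setup
# (two of the five variables shown here, with made-up candidate values).
ACTIONS = {
    "width_agent": [250, 300, 350],   # candidate beam widths (mm, illustrative)
    "depth_agent": [400, 500, 600],   # candidate beam depths (mm, illustrative)
}

class DoubleQAgent:
    """Tabular double Q-learning: qa picks the action, qb evaluates it."""

    def __init__(self, actions, alpha=0.5, gamma=0.9):
        self.actions = actions
        self.qa = defaultdict(float)  # "online" table: selects argmax action
        self.qb = defaultdict(float)  # "target" table: evaluates that action
        self.alpha, self.gamma = alpha, gamma

    def update(self, state, action, reward, next_state):
        # Decoupling selection (qa) from evaluation (qb) is the key idea
        # that DDQN carries over to deep networks.
        best = max(self.actions, key=lambda a: self.qa[(next_state, a)])
        target = reward + self.gamma * self.qb[(next_state, best)]
        self.qa[(state, action)] += self.alpha * (target - self.qa[(state, action)])

agents = {name: DoubleQAgent(acts) for name, acts in ACTIONS.items()}
# the reward would come from checking the joint design against ACI 318
agents["width_agent"].update("design_case_0", 300, reward=1.0, next_state="done")
```

In the full method the reward is shared: a design is scored only after all five agents have committed their variables.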
6.
2009.03 · KCI-indexed · Free for subscribing institutions, paid for individual members
This paper considers a multi-agent scheduling problem under public information, in which a single machine is shared by multiple agents. Each agent's local objective is to minimize either its total completion time or a maximum-type criterion.
4,000 KRW
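For the total-completion-time objective mentioned in the abstract, the classical single-machine result is that sequencing jobs by shortest processing time (SPT) first is optimal. A small sketch (not from the paper) shows the objective and the rule:

```python
def total_completion_time(processing_times):
    """Total completion time of jobs run back-to-back in the given order."""
    t, total = 0, 0
    for p in processing_times:
        t += p        # this job finishes at cumulative time t
        total += t    # add its completion time to the objective
    return total

# SPT (shortest processing time first) minimizes total completion time
# on a single machine -- the standard result the agents' local
# objectives build on.
jobs = [4, 1, 3]
spt_order = sorted(jobs)
assert total_completion_time(spt_order) <= total_completion_time(jobs)
```

In the multi-agent version, each agent applies such a criterion only to its own jobs while all jobs compete for the shared machine, which is what makes the problem hard.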
7.
2017.09 · KCI-indexed · Service discontinued (viewing restricted)
This paper presents a finite-time sliding mode control (FSMC) scheme with an integral finite-time sliding surface, applying concepts from graph theory to a distributed wheeled mobile robot (WMR) system. The kinematic and dynamic properties of the WMR system are considered simultaneously in designing the finite-time sliding mode controller. Consensus and formation control laws for distributed WMR systems are then derived using graph theory. The kinematic and dynamic controllers are applied together to compensate for the dynamic effects of the WMR system. Compared to conventional sliding mode control (SMC), fast convergence is assured, and the finite-time performance index is derived using an extended Lyapunov function with an adaptive law to handle the uncertainty. Numerical simulation results of formation control for WMR systems show the efficacy of the proposed controller.
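The graph-theoretic consensus law the abstract refers to can be illustrated with a first-order integrator sketch: each agent moves toward its neighbors on the communication graph until all states agree. This is only the consensus part in its simplest form; the paper combines it with finite-time sliding mode control and full WMR dynamics.

```python
def consensus_step(x, adjacency, gain=0.1):
    """One Euler step of the consensus law x_i += gain * sum_j a_ij * (x_j - x_i)."""
    n = len(x)
    return [
        x[i] + gain * sum(adjacency[i][j] * (x[j] - x[i]) for j in range(n))
        for i in range(n)
    ]

# Three agents on a line graph: 0 -- 1 -- 2 (illustrative topology).
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
x = [0.0, 5.0, 10.0]
for _ in range(200):
    x = consensus_step(x, A)
# on an undirected connected graph the states converge to the
# initial average (here 5.0)
```

Formation control follows the same pattern with a constant offset added per agent, so the agents agree on positions shifted by the desired formation geometry.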
8.
2011.02 · KCI-indexed · Service discontinued (viewing restricted)
Realistically simulating the collective behavior of multiple agents in virtual worlds is important. While most existing research has focused on collective behavior in open spaces, this paper studies collective behavior using steering forces in confined spaces. The agents share a single common destination within the confined space and must move to it while avoiding collisions with other agents. Three possible agent models are presented for this environment, along with the six steering forces the models require. The models were simulated to demonstrate their correctness. The simulation results show that, although each agent's arrival time at the destination varied with the walls and doorways, the proposed models, which use only steering forces, work well in confined spaces.
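Two of the classic steering forces such models typically combine, seeking a shared destination and separating from nearby agents, can be sketched as follows. This is a generic illustration of the steering-force idea, not the paper's six specific forces.

```python
import math

def seek(pos, target, max_force=0.5):
    """Steering force pointing from pos toward target at fixed magnitude."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0      # avoid division by zero at the target
    return (max_force * dx / d, max_force * dy / d)

def separation(pos, neighbors, radius=2.0, max_force=0.5):
    """Repulsive force away from neighbors inside the given radius."""
    fx = fy = 0.0
    for nx, ny in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy)
        if 0 < d < radius:
            fx += (dx / d) / d         # closer neighbors push harder
            fy += (dy / d) / d
    return (max(-max_force, min(max_force, fx)),
            max(-max_force, min(max_force, fy)))

pos, exit_door = (0.0, 0.0), (10.0, 0.0)
sx, sy = seek(pos, exit_door)
rx, ry = separation(pos, [(0.5, 0.0)])
force = (sx + rx, sy + ry)   # combined steering force applied to the agent
```

A wall-avoidance force of the same shape (a repulsion from the nearest wall point) is what confines the agents to the room while they funnel through the doorway.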
9.
2006.09 · KCI-indexed · Service discontinued (viewing restricted)
Today, team-based games in which multiple players form a team, such as sports, RTS, and RPG games, increasingly require team AI techniques. Existing independent intelligent agents have focused on improving their autonomy in solving problems on their own, but they lack the ability to cooperate and interact with other agents. This paper introduces a level-integration approach that provides effective role assignment and autonomy in a multi-agent system. To achieve a complex goal, each agent is assigned its own goal based on its role information, and each agent judges and acts on its assigned role by itself in a dynamic environment. The overall team goal is coordinated from the strategic perspective of complementary role assignment. Each agent shares its state information with the others through a data board, inducing mutual cooperation. Each role-assigned agent has its own planning capability and takes appropriate actions in the dynamic environment. Conflicts arise during this cooperation and interaction, and the role of a coordination agent that controls them is approached from a strategic perspective. Comparative experiments show that the proposed level-integration approach outperforms the existing centralized and decentralized approaches.
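The data-board idea from the abstract, agents with assigned roles posting their status to a shared board so teammates can coordinate, can be sketched minimally. The class names, roles, and the defender's fallback rule are assumptions for illustration, not the paper's design.

```python
class DataBoard:
    """Shared board through which agents publish and read status."""

    def __init__(self):
        self._state = {}

    def post(self, agent_id, info):
        self._state[agent_id] = info   # share this agent's current status

    def read(self, agent_id):
        # everything posted by teammates, excluding the reader itself
        return {k: v for k, v in self._state.items() if k != agent_id}

class RoleAgent:
    """Agent that decides its action from its role plus the board's contents."""

    def __init__(self, agent_id, role, board):
        self.agent_id, self.role, self.board = agent_id, role, board

    def act(self):
        teammates = self.board.read(self.agent_id)
        # example coordination rule: a defender falls back only if no
        # teammate currently reports that it is defending
        if self.role == "defender" and not any(
            v == "defending" for v in teammates.values()
        ):
            action = "defending"
        else:
            action = "supporting"
        self.board.post(self.agent_id, action)
        return action

board = DataBoard()
a1 = RoleAgent("a1", "defender", board)
a2 = RoleAgent("a2", "defender", board)
print(a1.act(), a2.act())  # → defending supporting
```

Because the second defender sees the first one's posting, the two avoid duplicating the role without any central controller, which is the cooperation the board is meant to induce.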