Search Results


        Search results: 63

        1.
        2024.12 | KCI-indexed | Free for subscribing institutions; paid for individual members
        MES (manufacturing execution system) plays a critical role in improving production efficiency by managing operations across the entire manufacturing system. Conventional manufacturing systems employ a centralized control structure, which limits the flexibility, scalability, and reconfigurability of the manufacturing system. Agent-based manufacturing systems, on the other hand, are better suited to dynamic environments due to their inherent autonomy and reconfigurability. In this study, we propose an agent-based MES and present its inter-agent collaboration model along with a data structure. The agent-based MES consists of three types of core agents: WIPAgent, PAgent (processing agent), and MHAgent (material-handling agent). The entire manufacturing execution process operates through collaboration among these core agents, and all collaboration is carried out through autonomous interactions between the agents. In particular, the order-by-order dispatching process and the WIP (work-in-process) routing process are each represented as collaboration models to facilitate understanding and analysis of the processes. In addition, we define the data specifications required for MES implementation and operation, along with their structures and relationships. Moreover, we build a prototype system employing a simulation model of an exemplary shop floor as a test bed. The framework proposed in this study can serve as a basis for building an automated operating system in a distributed environment.
        KRW 4,300
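The collaboration described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the class interfaces and messages below are hypothetical stand-ins showing how the three named agent types (WIPAgent, PAgent, MHAgent) might interact to route one WIP item.

```python
class PAgent:
    """Processing agent: owns a machine and accepts processing requests."""
    def __init__(self, name):
        self.name = name
        self.queue = []

    def request_processing(self, wip):
        # Accept the WIP item into the machine's queue.
        self.queue.append(wip)
        return f"{self.name} accepted {wip}"


class MHAgent:
    """Material-handling agent: moves WIP between processing agents."""
    def transport(self, wip, dest):
        return f"moved {wip} to {dest.name}"


class WIPAgent:
    """WIP agent: negotiates its own routing with PAgents via an MHAgent."""
    def __init__(self, wip_id, mh_agent):
        self.wip_id = wip_id
        self.mh = mh_agent

    def route_to(self, p_agent):
        # Autonomous interaction: first request transport, then processing.
        log = [self.mh.transport(self.wip_id, p_agent)]
        log.append(p_agent.request_processing(self.wip_id))
        return log
```

In a fuller model, the WIPAgent would select among several PAgents by negotiation rather than being handed a destination.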
        2.
        2024.12 | KCI-indexed | Free for subscribing institutions; paid for individual members
        In recent years, manufacturing environments have been subject to rapid changes in conditions and constraints. In such settings, a dispatching system that allocates work to resources at the right time plays an important role in improving production speed and quality. Rule-based static dispatching methods have been widely used, but applying a static approach to a dynamic production environment with uncertainty leads to several problems, including decreased productivity, delayed deliveries, and lower utilization. A dynamic dispatching method is therefore needed. This study develops a reinforcement learning-based dynamic dispatching system in which dispatching agents learn optimal dispatching rules for given environmental states. The state space represents information such as WIP (work-in-process) and inventory levels, order status, machine status, and process status. A dispatching agent selects the dispatching rule that simultaneously pursues two objectives: minimizing total tardiness and minimizing the number of setups. In particular, this study targets a multi-area manufacturing system consisting of a flow-shop area and a cellular-shop area. Accordingly, in addition to the dispatching agent that manages inputs to the flow shop, a second dispatching agent that manages transfers from the flow shop to the cellular shop is developed, and the two agents interact closely with each other. The proposed agent-based dispatching system is compared against the existing static dispatching method to verify its performance.
        KRW 4,000
        7.
        2024.06 | KCI-indexed | Free for subscribing institutions; paid for individual members
        In the manufacturing industry, dispatching systems play a crucial role in enhancing production efficiency and optimizing throughput. In dynamic production environments, however, conventional static dispatching methods struggle to adapt to changing conditions and constraints, leading to problems such as reduced output, delays, and wasted resources. Dynamic dispatching methods that can quickly adapt to environmental changes are therefore needed. In this study, we develop an agent-based model that captures dynamic situations through interaction between agents, and we employ the Q-learning algorithm, a form of temporal-difference (TD) learning, to update automatically and adapt to dynamic situations. Q-learning can handle dynamic environments by responding sensitively to changes in the state space and selecting optimal dispatching rules accordingly. The state space includes information such as inventory and work-in-process levels, order-fulfilment status, and machine status, which is used to select the optimal dispatching rules. Using reinforcement learning, we aim to minimize total tardiness and the number of setup changes. Finally, we develop a dynamic dispatching system based on Q-learning and compare its performance with conventional static dispatching methods.
        KRW 4,000
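The Q-learning step named in the abstract can be sketched in a few lines. The states, rule set, and reward below are hypothetical stand-ins for the shop-floor signals the abstract describes (WIP levels, tardiness, setups); the TD update itself is the standard one.

```python
import random

RULES = ["SPT", "EDD", "FIFO"]  # candidate dispatching rules (the action set)

def select_rule(q_table, state, epsilon=0.1):
    """Epsilon-greedy choice of a dispatching rule for a given state."""
    if random.random() < epsilon:
        return random.choice(RULES)
    return max(RULES, key=lambda r: q_table.get((state, r), 0.0))

def update_q(q_table, state, rule, reward, next_state, alpha=0.1, gamma=0.9):
    """Temporal-difference update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q_table.get((next_state, r), 0.0) for r in RULES)
    old = q_table.get((state, rule), 0.0)
    q_table[(state, rule)] = old + alpha * (reward + gamma * best_next - old)
```

In use, the simulation would encode its WIP/inventory/machine status into `state`, apply the selected rule, and feed back a reward penalizing tardiness and setup changes.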
        8.
        2024.06 | KCI-indexed | Free for subscribing institutions; paid for individual members
        The threat of North Korea's long-range firepower is recognized as a typical asymmetric threat, and South Korea is prioritizing the development of a Korean-style missile defense system to defend against it. Previous research modeled North Korean long-range artillery attacks as a Markov decision process (MDP) and used approximate dynamic programming as the missile defense algorithm; owing to that method's limitations, this paper instead applies deep reinforcement learning. We develop a missile defense algorithm by applying a modified DQN within a multi-agent deep reinforcement learning framework. We examine how effectively the resulting system responds to enemy missile attacks, considering the styles of attack seen in recent wars, and show that the policies learned through deep reinforcement learning yield superior results.
        KRW 4,000
        11.
        2024.03 | KCI-indexed | Free for subscribing institutions; paid for individual members
        Given the cost and limits of data collection in the real world, simulation environments are an efficient alternative for data generation and experimentation. In this study, Unity ML Agent was implemented as a reinforcement-learning model suited to the roguelike genre. An agent was embedded in a simple game, and the process by which the agent recognizes and responds to enemies was implemented in code. The initial model showed limitations in aimed shooting, but by directly providing the agent's sensor information through RayPerceptionSensorComponent2D, the agent could be observed detecting enemies and firing at them. As a result, the improved model showed an average 3.81-fold performance improvement, demonstrating that Unity ML Agent can be used for data collection through reinforcement learning in the roguelike genre.
        KRW 4,000
        12.
        2023.06 | KCI-indexed | Free for subscribing institutions; paid for individual members
        Reinforcement learning (RL) is widely applied in various engineering fields. In particular, RL has shown successful performance on control problems such as vehicles, robotics, and active structural control systems. However, little research on applying RL to optimal structural design has been conducted to date. In this study, the applicability of RL to the structural design of a reinforced concrete (RC) beam was investigated. The RC beam design problem introduced in a previous study was used for comparison. Deep Q-network (DQN), a well-known RL algorithm that performs well in discrete action spaces, was adopted. The action of the DQN agent must represent the design variables of the RC beam, but there are too many design variables to represent with the action of a conventional DQN. To solve this problem, a multi-agent DQN was used. For a more effective learning process, DDQN (double DQN), an advanced version of the conventional DQN, was employed. The multi-agent DDQN was trained for the optimal structural design of an RC beam satisfying ACI 318 (American Concrete Institute) without any hand-labeled dataset. Five DDQN agents provide actions for beam width, beam depth, main rebar size, number of main rebars, and shear stirrup size, respectively. The agents were trained for 10,000 episodes, and the performance of the multi-agent DDQN was evaluated on 100 test design cases. This study shows that the multi-agent DDQN algorithm can successfully produce structural design results for RC beams.
        KRW 4,000
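The DDQN target that distinguishes double DQN from vanilla DQN can be sketched directly: the online network selects the next action and the target network evaluates it, reducing overestimation bias. The q-value lists below are hypothetical stand-ins for network outputs over discrete design actions (e.g., candidate beam widths).

```python
def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a))."""
    if done:
        return reward
    # Online network picks the action; target network scores it.
    best_action = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best_action]
```

Each of the five agents in the study would compute such a target over its own action set (widths, depths, rebar sizes, etc.) during training.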
        18.
        2018.03 | KCI-indexed | Free for subscribing institutions; paid for individual members
        With the rapid advances in speech recognition, conversational agents have become widespread in recent years through smartphones and AI speakers. Recognition accuracy has reached human levels, but agents still struggle to grasp the meaning or intent of utterances and to follow long conversations. Users therefore encounter various error situations when using conversational agents, which can negatively affect the user experience. Moreover, for AI speakers whose primary interface is voice, the lack of feedback about the agent's capabilities and limitations is cited as a factor discouraging continued use. Research is thus needed on helping users better understand those capabilities and limitations and on mitigating negative emotions in error situations, but related work remains scarce. In this study, we applied two social strategies, forewarning and apology, to a conversational agent and investigated how they affect users' perceptions of the agent. We produced demo videos of a user conversing with a conversational agent, varying whether forewarning and apology were present, showed them to participants, and had participants rate likability and trust via a questionnaire. Analysis of 104 responses yielded results contrary to our literature-based expectations: forewarning gave users a negative impression and, in particular, harmed trust in the agent, while an apology in an error situation had no significant effect on likability or trust. In-depth interviews suggested that participants perceived the AI speaker as a simple machine rather than a human-like persona, so social strategies that operate in human relationships did not take effect. These results show that how an agent should handle error situations must depend on how machine-like or human-like users perceive the agent to be.
        KRW 4,300
        19.
        2015.09 | KCI-indexed | Free for subscribing institutions; paid for individual members
        Recently, scheduling problems with position-dependent processing times, in which the processing times of jobs depend on the processing sequence, have received considerable attention in the literature. However, prior studies did not consider cases in which each job has a different learning or aging ratio. This means that the actual processing time of a job can be determined not only by its position in the sequence but also by a learning/aging ratio reflecting the degree of processing difficulty of subsequent jobs. Motivated by these observations, this paper considers a two-agent single-machine scheduling problem with linear job-dependent position-based learning effects, in which two agents compete for a common single machine and each job has its own learning ratio. Specifically, the two agents have different objective functions: one agent minimizes the total weighted completion time, while the other restricts the makespan to at most an upper bound. After formally defining the problem with a mixed-integer non-linear programming formulation, we devise a branch-and-bound (B&B) algorithm that yields optimal solutions, developing four dominance properties based on pairwise interchange comparisons and four properties concerning the feasibility of a candidate sequence. We also suggest a lower bound that speeds up the search procedure by fathoming non-promising nodes. As the problem is at least NP-hard, we further propose efficient genetic algorithms using different initial-population generation methods and two crossover operations. Computational results show that the proposed algorithms efficiently obtain near-optimal solutions.
        KRW 4,300
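A linear job-dependent position-based learning effect can be made concrete with a small sketch. The functional form below, p_j(r) = p_j * (1 + a_j * (r - 1)), is one common linear model and is an assumption; the paper's exact form may differ. A negative ratio a_j models learning (speed-up), a positive one models aging.

```python
def actual_processing_time(base_time, learning_ratio, position):
    """Actual time of a job scheduled in position r (1-indexed),
    under the assumed linear model p * (1 + a * (r - 1))."""
    return base_time * (1.0 + learning_ratio * (position - 1))

def total_completion_time(jobs):
    """Sum of completion times for a sequence of (base_time, ratio) jobs
    on a single machine, applying the position-based effect."""
    t, total = 0.0, 0.0
    for position, (base, ratio) in enumerate(jobs, start=1):
        t += actual_processing_time(base, ratio, position)
        total += t
    return total
```

Because each job carries its own ratio, swapping two jobs changes both their positions and the effects applied, which is what makes the sequencing problem hard.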
        20.
        2014.12 | KCI-indexed | Free for subscribing institutions; paid for individual members
        This paper addresses an emotion-computing model for software affective agents. Emotion is represented in valence-arousal-dominance dimensions instead of a discrete categorical representation. First, a novel emotion model architecture for affective agents is proposed based on Scherer's componential theory of human emotion, one of the well-known emotion models in psychology. Fuzzy logic is then applied to determine the emotional statuses in the architecture, i.e., the first valence and arousal, the second valence and arousal, and dominance. The proposed methods are implemented and tested by applying them in a virtual training system for children with neurobehavioral disorders.
        KRW 4,200
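The fuzzy-logic step described in the abstract can be illustrated with a minimal sketch: a triangular membership function maps a crisp appraisal score onto fuzzy valence sets. The set boundaries and labels here are illustrative assumptions, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_valence(score):
    """Membership degrees of a valence score in [-1, 1] in three fuzzy sets."""
    return {
        "negative": tri(score, -1.5, -1.0, 0.0),
        "neutral":  tri(score, -1.0, 0.0, 1.0),
        "positive": tri(score, 0.0, 1.0, 1.5),
    }
```

A full model would do the same for arousal and dominance and combine the memberships through fuzzy rules before defuzzifying to an emotional status.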