Recently, environmental conditions and constraints in the manufacturing industry have been changing rapidly. In this setting, a dispatching system that allocates work to resources at the appropriate time plays an important role in improving the speed and quality of production. In general, rule-based static dispatching methods have been widely used. However, applying this static approach to a dynamic production environment with uncertainty leads to several challenges, including decreased productivity, delayed delivery, and lower operating rates. Therefore, a dynamic dispatching method is needed to address these challenges. This study aims to develop a reinforcement learning-based dynamic dispatching system in which dispatching agents learn optimal dispatching rules for given environmental states. The state space represents information such as work-in-process (WIP) and inventory levels, order status, machine status, and process status. A dispatching agent selects an optimal dispatching rule under the multiple objectives of simultaneously minimizing total tardiness and the number of setups. In particular, this study targets a multi-area manufacturing system consisting of a flow-shop area and a cellular-shop area. Thus, in addition to the dispatching agent that manages inputs to the flow-shop, a dispatching agent that manages transfers from the flow-shop to the cellular-shop is also developed. These two agents interact closely with each other. In this study, an agent-based dispatching system is developed, and its performance is verified by comparing the proposed system with the existing static dispatching method.
In the manufacturing industry, dispatching systems play a crucial role in enhancing production efficiency and optimizing production volume. However, in dynamic production environments, conventional static dispatching methods struggle to adapt to changing environmental conditions and constraints, leading to problems such as reduced production volume, delays, and resource wastage. Therefore, there is a need for dynamic dispatching methods that can quickly adapt to changes in the environment. In this study, we aim to develop an agent-based model that handles dynamic situations through interaction between agents. Additionally, we utilize the Q-learning algorithm, which has the characteristics of temporal difference (TD) learning, to update its estimates automatically and adapt to dynamic situations. This means that Q-learning can effectively handle dynamic environments by responding sensitively to changes in the state space and selecting optimal dispatching rules accordingly. The state space includes information such as inventory and work-in-process levels, order fulfillment status, and machine status, which is used to select the optimal dispatching rules. Furthermore, we aim to minimize total tardiness and the number of setup changes using reinforcement learning. Finally, we develop a dynamic dispatching system using Q-learning and compare its performance with conventional static dispatching methods.
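The Q-learning scheme described above can be illustrated with a minimal sketch: a tabular agent maps a discretized shop state to one of several candidate dispatching rules via a TD update and an epsilon-greedy policy. The rule set, the two-state toy environment, and the reward shape are illustrative assumptions, not the study's actual configuration.

```python
import random

random.seed(0)

# Candidate dispatching rules the agent chooses among (hypothetical set;
# the actual rule set of the study is not specified in the abstract).
RULES = ["FIFO", "SPT", "EDD"]

class QDispatcher:
    """Tabular Q-learning agent mapping a discretized shop state
    (e.g. WIP level, order status) to a dispatching rule."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}  # (state, rule) -> estimated value

    def select(self, state):
        # Epsilon-greedy choice over the candidate rule set.
        if random.random() < self.epsilon:
            return random.choice(RULES)
        return max(RULES, key=lambda r: self.q.get((state, r), 0.0))

    def update(self, state, rule, reward, next_state):
        # Temporal-difference (TD) update of the Q-table.
        best_next = max(self.q.get((next_state, r), 0.0) for r in RULES)
        old = self.q.get((state, rule), 0.0)
        self.q[(state, rule)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Toy two-state environment: in state 0 rule "SPT" avoids tardiness,
# in state 1 rule "EDD" does; reward penalizes a wrong rule choice.
def toy_reward(state, rule):
    best = {0: "SPT", 1: "EDD"}[state]
    return 1.0 if rule == best else -1.0

agent = QDispatcher()
state = 0
for _ in range(2000):
    rule = agent.select(state)
    reward = toy_reward(state, rule)
    next_state = random.choice([0, 1])
    agent.update(state, rule, reward, next_state)
    state = next_state

# Greedy policy learned from the toy environment.
greedy = {s: max(RULES, key=lambda r: agent.q.get((s, r), 0.0))
          for s in (0, 1)}
```

In a real dispatching system the reward would combine the two stated objectives, e.g. a weighted sum of negative tardiness and a setup-change penalty, and the state would be a discretization of the WIP, inventory, order, and machine information listed above.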
Animals must maintain a proper balance between energy intake and expenditure. Recently, we discovered that the enzymatic co-factor tetrahydrobiopterin (BH4) inhibits feeding in Drosophila. BH4 biosynthesis requires the sequential action of the conserved enzymes Punch, Purple, and Sepiapterin Reductase (Sptr). Although we observe increased feeding upon loss of Punch and Purple in the adult fat body, loss of Sptr must occur in the brain. We found that Sptr expression is required in four adult brain neurons that express NPF, the fly homologue of the vertebrate appetite regulator NPY. Mechanistically, we found that BH4 deficiency reduces NPF levels, while excess BH4 increases NPF accumulation without altering its expression.
Two Cronobacter selective media (DFI agar, R&F agar) were compared, together with a real-time PCR method, for their ability to selectively isolate Cronobacter from powdered infant formula and dried pumpkin. The Cronobacter detection rates in powdered infant formula showed no significant difference among the three methods (p < 0.05), whereas for dried pumpkin, the R&F medium and the real-time PCR method showed significantly higher detection rates than DFI (p < 0.05). Regarding selectivity between the media, the R&F selective medium also showed significantly higher selectivity than DFI for dried pumpkin (p < 0.05). The use of real-time PCR and R&F medium is considered an effective approach for detecting Cronobacter not only in powdered infant formula but also in food groups with high levels of competing microflora, such as dried pumpkin, that can be used as raw materials for infant foods.