As digital transformation accelerates, the platform business has become a core business model in modern society. Platform businesses exhibit network effects in which the winner takes all, so a company's pricing policy in the early stages of business is crucial for attracting as many customers as possible. Telecommunication service companies face stagnant growth due to the saturation of the smartphone market and intensifying price competition, while the burden of maintaining communication networks grows as traffic generated by domestic and foreign content service providers (CSPs) surges. This study aims to understand the dynamic characteristics of the telecommunications market, focusing on pricing policy. To this end, we analyze how internet service providers (ISPs), CSPs, and consumers react to changes in pricing policy, based on the prisoner's dilemma. The dynamic characteristics of the market are analyzed through simulation with an agent-based model.
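The prisoner's-dilemma framing above can be sketched as an iterated game between two competing providers who each decide whether to hold or cut their price. The payoff matrix and the strategies below are illustrative assumptions, not the payoffs or agent rules used in the study:

```python
# Hypothetical payoff matrix for two competing providers deciding whether to
# hold ("cooperate") or cut ("defect") their price. Values are illustrative.
PAYOFFS = {
    ("hold", "hold"): (3, 3),   # both keep margins
    ("hold", "cut"):  (0, 5),   # price-cutter captures the market
    ("cut",  "hold"): (5, 0),
    ("cut",  "cut"):  (1, 1),   # price war erodes both margins
}

def play_iterated(strategy_a, strategy_b, rounds=100):
    """Run an iterated game and return the two cumulative payoffs."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_cut(opponent_history):
    """Always defect: cut the price every round."""
    return "cut"

def tit_for_tat(opponent_history):
    """Hold price first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "hold"

print(play_iterated(tit_for_tat, tit_for_tat))  # mutual holding: (300, 300)
print(play_iterated(always_cut, tit_for_tat))   # one early gain, then a price war
```

In this toy setting, mutual price-holding dominates the long-run price war, which is the market dynamic the abstract describes: aggressive early pricing can capture customers, but symmetric defection hurts everyone.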
In the manufacturing industry, dispatching systems play a crucial role in enhancing production efficiency and optimizing production volume. However, in dynamic production environments, conventional static dispatching methods struggle to adapt to changing conditions and constraints, leading to reduced output, delays, and resource waste. Dynamic dispatching methods that adapt quickly to environmental changes are therefore needed. In this study, we develop an agent-based model that captures dynamic situations through interactions between agents. We also employ the Q-learning algorithm, a temporal-difference (TD) learning method, to update automatically and adapt to dynamic situations: Q-learning responds sensitively to changes in the state space and selects the optimal dispatching rule accordingly. The state space includes information such as inventory and work-in-process levels, order-fulfilment status, and machine status, which is used to select the optimal dispatching rules. Using reinforcement learning, we aim to minimize total tardiness and the number of setup changes. Finally, we develop a dynamic dispatching system based on Q-learning and compare its performance with conventional static dispatching methods.
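The Q-learning scheme above, selecting a dispatching rule per state, can be sketched with a tabular TD update. The state encoding, candidate rules, and reward shaping below are illustrative assumptions (the paper's actual state space and shop-floor model are richer):

```python
import random
from collections import defaultdict

# Candidate dispatching rules (illustrative): shortest processing time,
# earliest due date, and preferring jobs that avoid a setup change.
ACTIONS = ["SPT", "EDD", "SAME_SETUP_FIRST"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_rule(state):
    """Epsilon-greedy selection over dispatching rules."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def td_update(state, action, reward, next_state):
    """One-step temporal-difference (Q-learning) update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy episode: the state summarizes WIP level and current machine setup;
# the reward penalizes tardiness and setup changes (both hypothetical here,
# with SAME_SETUP_FIRST assumed to avoid the setup-change penalty).
random.seed(0)
state = ("wip_high", "setup_A")
for _ in range(1000):
    rule = choose_rule(state)
    reward = -0.2 if rule == "SAME_SETUP_FIRST" else -1.0
    next_state = state  # single-state toy environment for brevity
    td_update(state, rule, reward, next_state)
    state = next_state

best = max(ACTIONS, key=lambda a: Q[(state, a)])
print(best)  # the agent learns to prefer SAME_SETUP_FIRST in this toy setting
```

In the full model, the state would change after each dispatch (new inventory, WIP, and machine status), so `next_state` would differ from `state` and the TD term would propagate value across states.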
Spare-parts management is critical for products with many parts and long lifecycles, such as automobiles and aircraft. The supply chain must support immediate procurement for repairs. However, handling spare parts efficiently is difficult because of the huge number of stock-keeping units. Accurate forecasting is the basis for the supply chain to achieve this goal. In this paper, we propose an agent-based modeling approach that can deal with various factors simultaneously without explicit mathematical modeling. Simulation results show that the proposed method reasonably describes the demand-generation process and, consequently, forecasts spare-parts demand from a long-term perspective.
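The demand-generation process described above can be sketched as a fleet of product agents whose parts fail and trigger spare-part demand. The fleet size, age-dependent hazard, and replacement behaviour below are illustrative assumptions, not the paper's calibrated model:

```python
import random

class ProductAgent:
    """One fielded product; its part fails with age-increasing probability."""

    def __init__(self):
        self.part_age = 0

    def step(self):
        """Advance one period; return 1 if the part fails and demands a spare."""
        self.part_age += 1
        # Hypothetical hazard: failure probability grows with part age, capped.
        failure_prob = min(0.01 * self.part_age, 0.5)
        if random.random() < failure_prob:
            self.part_age = 0  # failed part replaced by a new spare
            return 1
        return 0

def simulate_demand(fleet_size=1000, periods=60, seed=42):
    """Aggregate per-period spare-part demand over the whole fleet."""
    random.seed(seed)
    fleet = [ProductAgent() for _ in range(fleet_size)]
    return [sum(agent.step() for agent in fleet) for _ in range(periods)]

demand = simulate_demand()
print(min(demand), max(demand))
```

Running the fleet forward yields a monthly demand series whose long-run average can serve as a forecast; richer versions would add agent heterogeneity, product retirements, and new-product introductions as further demand drivers.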