As various types of products are produced in a single production system, it is important to determine a scheduling policy that selects one of the different types. In addition, failures of processes in a line need to be considered, arising from machine breakdowns, raw material supply and demand, quality issues, worker absence, and so on. Therefore, we study production systems with multiple product types, dedicated buffers for each product type, Bernoulli equipment, and either WIP-based or cyclic scheduling. To analyze such systems exactly, we introduce a Markov chain-based method for evaluating performance measures such as production rate, WIP level, blocking probability, and starvation probability, and we derive various structural characteristics. In particular, under the assumption that the equipment need not reselect the type it has just attempted, the flow rate is no longer conserved, and increasing buffer capacity does not guarantee an increase in production rate. The performance of the WIP-based and cyclic policies is compared as well.
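As a concrete illustration of the Markov chain evaluation described above, the following is a minimal sketch for a toy instance: a single Bernoulli machine feeding two dedicated finite buffers, with assumed parameters (success probability p, per-type drain probabilities d, capacities cap) and a simple WIP-based type-selection rule of our own choosing. It builds the transition matrix and solves for the stationary distribution.

```python
import numpy as np
from itertools import product

# Toy instance (hypothetical parameters): one Bernoulli machine feeding two
# dedicated finite buffers, one per product type. Each time slot the machine
# picks a type by a WIP-based rule (serve the type with more free buffer
# space) and succeeds with probability p; each buffer is drained downstream
# with probability d[i]. State = (n0, n1) = buffer contents. Production into
# a full buffer is lost (blocking).
p, d, cap = 0.8, (0.5, 0.4), (3, 3)

states = list(product(range(cap[0] + 1), range(cap[1] + 1)))
idx = {s: k for k, s in enumerate(states)}
P = np.zeros((len(states), len(states)))

for (n0, n1) in states:
    t = 0 if (cap[0] - n0) >= (cap[1] - n1) else 1   # WIP-based type choice
    for prod in (0, 1):            # 1 = machine completes a part this slot
        pp = p if prod else 1 - p
        for r0 in (0, 1):          # downstream removal attempts per buffer
            for r1 in (0, 1):
                pr = pp * (d[0] if r0 else 1 - d[0]) * (d[1] if r1 else 1 - d[1])
                m0 = max(n0 - r0, 0) + (prod if t == 0 else 0)
                m1 = max(n1 - r1, 0) + (prod if t == 1 else 0)
                P[idx[(n0, n1)], idx[(min(m0, cap[0]), min(m1, cap[1]))]] += pr

# Stationary distribution: solve pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(len(states)), np.ones(len(states))])
b = np.append(np.zeros(len(states)), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]

wip = sum(pi[idx[s]] * (s[0] + s[1]) for s in states)
print(f"mean total WIP = {wip:.3f}")
```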
Recently, in the manufacturing industry, changes in environmental conditions and constraints arise rapidly. A dispatching system that allocates work to resources at the right time therefore plays an important role in improving the speed and quality of production. In general, rule-based static dispatching methods have been widely used. However, applying this static approach to a dynamic production environment with uncertainty leads to several challenges, including decreased productivity, delayed delivery, and lower operating rates. A dynamic dispatching method is therefore needed to address these challenges. This study aims to develop a reinforcement learning-based dynamic dispatching system in which dispatching agents learn optimal dispatching rules for given environmental states. The state space represents information such as WIP (work-in-process) and inventory levels, order status, machine status, and process status. A dispatching agent selects an optimal dispatching rule that simultaneously considers the objectives of minimizing total tardiness and minimizing the number of setups. In particular, this study targets a multi-area manufacturing system consisting of a flow-shop area and a cellular-shop area. Thus, in addition to the dispatching agent that manages inputs to the flow-shop, a dispatching agent that manages transfers from the flow-shop to the cellular-shop is also developed; these two agents interact closely with each other. An agent-based dispatching system is developed, and its performance is verified by comparing the proposed system with the existing static dispatching method.
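A minimal sketch of the rule-selection idea, assuming a tabular Q-learning agent and a placeholder rule set (SPT, EDD, MIN_SETUP are illustrative names, not the paper's); the production simulator and state encoding are left abstract.

```python
import random
from collections import defaultdict

# Tabular Q-learning skeleton for a dispatching agent that picks one of
# several dispatching rules per decision epoch. The environment interface
# and the rule set are placeholders for the paper's simulator.
RULES = ["SPT", "EDD", "MIN_SETUP"]          # candidate dispatching rules
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = defaultdict(lambda: [0.0] * len(RULES))

def choose(state):
    # epsilon-greedy selection over dispatching rules
    if random.random() < EPS:
        return random.randrange(len(RULES))
    return max(range(len(RULES)), key=lambda a: Q[state][a])

def update(state, action, reward, next_state):
    # One-step Q-learning backup; the reward could combine both objectives,
    # e.g. reward = -(w1 * tardiness_delta + w2 * setup_count).
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
```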
This study explores the utilization level of smart manufacturing systems in the value chain processes of manufacturing and empirically examines the effect of the utilization level of these systems on manufacturing competitiveness in SMEs. Smart manufacturing systems in the value chain processes are categorized into Sales, Purchasing, Production & Logistics, and Support systems. By analyzing the research model using structural equation modeling, this study identifies that Sales systems, Purchasing systems, Production & Logistics systems, and Support systems have a significant impact on manufacturing process efficiency. Additionally, Production & Logistics systems and manufacturing process efficiency positively and significantly influence manufacturing competitiveness. The findings suggest that the utilization of information is directly and positively related to manufacturing process efficiency, including reducing lead-time, decreasing work performance man-hours (M/H), and improving work accuracy. These improvements ultimately have a significant impact on manufacturing competitiveness. In conclusion, the use of smart manufacturing systems is becoming an integral part of the manufacturing industry. To gain a competitive edge, it will be necessary to introduce and utilize optimal smart manufacturing systems, taking into account the size of manufacturing firms.
In manufacturing systems, defective-product data are often scarce because defective items are hard to obtain, whereas good-product data are abundant. One frequently used way to resolve the problems caused by this data imbalance is data augmentation, which increases the data of a minor class with few observations until it is comparable in size to the major class. BAGAN-GP uses an autoencoder in the early stage of learning to infer the distributions of the major and minor classes and to initialize the weights of the GAN. To resolve the weight-clipping problem, in which weights concentrate on the boundary, the gradient penalty method is applied to distribute the weights appropriately within the range. Augmentation techniques such as SMOTE, ADASYN, and Borderline-SMOTE are linearity-based: they connect observations with a line segment and generate data by selecting a random point on that segment. In contrast, BAGAN-GP does not exhibit linearity because it generates data from the class distributions. To generate data with diverse characteristics from rare defective data, the MO1 and MO2 techniques are proposed. Data are augmented with the proposed techniques, and performance is compared against existing techniques by classifying the results with MLP, SVM, and random forest. MO1 performs well in most cases, presumably because the data were augmented more diversely by using both a linearity-based oversampling technique and the distribution-based BAGAN-GP technique.
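For comparison purposes, here is a brief sketch of the linearity-based baselines on synthetic imbalanced data, using imbalanced-learn's SMOTE, ADASYN, and Borderline-SMOTE; the proposed MO1/MO2 would additionally mix in GAN-generated samples, which is beyond this sketch.

```python
import numpy as np
from imblearn.over_sampling import SMOTE, ADASYN, BorderlineSMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical imbalanced data: X (features), y (1 = defective minority).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 8)), rng.normal(1.5, 1, (30, 8))])
y = np.array([0] * 500 + [1] * 30)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, sampler in [("SMOTE", SMOTE(random_state=0)),
                      ("ADASYN", ADASYN(random_state=0)),
                      ("Borderline", BorderlineSMOTE(random_state=0))]:
    X_aug, y_aug = sampler.fit_resample(X_tr, y_tr)  # line-segment based
    clf = RandomForestClassifier(random_state=0).fit(X_aug, y_aug)
    print(name, f1_score(y_te, clf.predict(X_te)))
```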
Recently, there have been studies on space and time priority queues, where space priorities are given to a class of packets that are sensitive to loss, and time priorities to another class of packets that are sensitive to delay. However, these studies have been restricted to models with push-out space priorities. In this paper, we extend the studies to the space and time priority M/G/1 model with partial-buffer-sharing (PBS) space priorities, where the whole buffer is divided into two regions: one shared by packets of all classes and the other dedicated only to packets of the higher space-priority class. Since the PBS space-priority mechanism can be implemented more readily in communication systems than the push-out one, there have been many contributions on PBS space-priority queues. However, there are no contributions on space and time priority queues with PBS space priorities. To analyze the proposed queueing model, we first study the probabilistic structure of the service time of a packet, which is more involved to analyze than the push-out alternative because it may be divided into three different regimes: a regime (S-period) from the beginning of the service until the shared buffer region becomes full, a second one (P-period) from the end of the S-period until the whole buffer becomes full, and a third one (F-period) from the end of the P-period until the end of the service. Using the distributions of the S-, P-, and F-periods, we then construct and analyze the embedded Markov chain and the corresponding semi-Markov process governing the system state, and also derive system performance measures such as expected sojourn times and loss probabilities of the different priority classes of packets. Finally, in numerical examples, we explore the effect of the shared buffer size, which is a major system control parameter of PBS priority queues, and of the service time distributions of the packet classes on the system performance measures.
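Restating the period decomposition in symbols (notation ours, not the paper's):

```latex
% Notation (ours): total buffer size $N$, shared region size $N_s$, dedicated
% high-space-priority region of size $N - N_s$. With $Q(t)$ the queue length
% at time $t$, a service time $S$ that begins while the shared region is not
% yet full decomposes as
\[
  S \;=\; S^{(\mathrm{S})} \;+\; S^{(\mathrm{P})} \;+\; S^{(\mathrm{F})},
\]
% where the S-period ends when $Q(t)$ first reaches $N_s$ (shared region
% full), the P-period ends when $Q(t)$ first reaches $N$ (whole buffer full),
% and the F-period runs until the service completes. Any of the three periods
% may be null if the corresponding buffer level is not reached during the
% service.
```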
As the Fourth Industrial Revolution advances, smart factories have become a new manufacturing paradigm, integrating technologies such as Information and Communication Technology (ICT), the Internet of Things (IoT), Artificial Intelligence (AI), and big data analytics to overcome traditional manufacturing limitations and enhance global competitiveness. This study offers a comprehensive approach by evaluating both technological and economic performance of smart factory Research and Development (R&D) projects, addressing gaps in previous studies that focused narrowly on either aspect. The research combines Latent Dirichlet Allocation (LDA) topic modeling and Data Envelopment Analysis (DEA) to quantitatively compare the efficiency of various topics. This integrated approach not only identifies key research themes but also evaluates how effectively resources are utilized within each theme, supporting strategic decision-making for optimal resource allocation. Additionally, non-parametric statistical tests are applied to detect performance differences between topics, providing insights into areas of comparative advantage. Unlike traditional DEA methods, which face limitations in generalizing results, this study offers a more nuanced analysis by benchmarking efficiency across thematic areas. The findings highlight the superior performance of projects incorporating AI, IoT, and big data, as well as those led by the Ministry of Trade, Industry, and Energy (MOTIE) and small and medium-sized enterprises (SMEs). The regional analysis reveals significant contributions from non-metropolitan areas, emphasizing the need for balanced development. This research provides policymakers and industry leaders with strategic insights, guiding the efficient allocation of R&D resources and fostering the development of smart factories aligned with global trends and national goals.
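A small sketch of the DEA component of the approach above, assuming the standard input-oriented CCR envelopment LP solved per decision-making unit (project); all data values are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Envelopment form: min theta  s.t.  X^T lam <= theta * x_o,  Y^T lam >= y_o.
    Decision vector z = [theta, lam_1..lam_n]."""
    n, m = X.shape
    _, s = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    # inputs:  sum_j lam_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # outputs: -sum_j lam_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Hypothetical example: 4 R&D projects (DMUs), 2 inputs (budget, staff),
# 1 output (performance score).
X = np.array([[100, 5], [80, 4], [120, 7], [60, 3]], dtype=float)
Y = np.array([[90], [70], [95], [65]], dtype=float)
for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```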
This study analyzes the impact of ESG (Environmental, Social, and Governance) activities on Corporate Financial Performance (CFP) using machine learning techniques. To address the linear limitations of traditional multiple regression analysis, the study employs AutoML (Automated Machine Learning) to capture the nonlinear relationships between ESG activities and CFP. The dataset consists of 635 companies listed on KOSPI and KOSDAQ from 2013 to 2021, with Tobin's Q used as the dependent variable representing CFP. The results show that machine learning models outperformed traditional regression models in predicting firm value. In particular, the Extreme Gradient Boosting (XGBoost) model exhibited the best predictive performance. Among ESG activities, the Social (S) indicator had a positive effect on CFP, suggesting that corporate social responsibility enhances corporate reputation and trust, leading to long-term positive outcomes. In contrast, the Environmental (E) and Governance (G) indicators had negative effects in the short term, likely due to factors such as the initial costs associated with environmental investments or governance improvements. Using the SHAP (Shapley Additive exPlanations) technique to evaluate the importance of each variable, it was found that Return on Assets (ROA), firm size (SIZE), and foreign ownership (FOR) were key factors influencing CFP. ROA and foreign ownership had positive effects on firm value, while major shareholder ownership (MASR) showed a negative impact. This study differentiates itself from previous research by analyzing the nonlinear effects of ESG activities on CFP and presents a more accurate and interpretable prediction model by incorporating machine learning and XAI (Explainable AI) techniques.
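A compact sketch of the XGBoost-plus-SHAP interpretation step on synthetic stand-in data; the variable names mirror the abstract (E, S, G, ROA, SIZE, FOR, MASR), and the target is only a placeholder for Tobin's Q.

```python
import numpy as np
import xgboost as xgb
import shap

# Hypothetical ESG panel: columns mirror the study's variables; the target
# stands in for Tobin's Q (values are synthetic, for illustration only).
rng = np.random.default_rng(1)
cols = ["E", "S", "G", "ROA", "SIZE", "FOR", "MASR"]
X = rng.normal(size=(635, 7))
y = 0.4 * X[:, 3] + 0.2 * X[:, 5] - 0.1 * X[:, 6] + rng.normal(0, 0.3, 635)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4).fit(X, y)

# TreeExplainer yields per-observation, per-feature contributions; mean |SHAP|
# ranks variable importance as in the paper's interpretation step.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(cols, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.4f}")
```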
Micro display technology has become a key element of digital innovation, gaining attention with immersive technologies like AR/VR and advancing in fields such as education, military, and healthcare. This study aims to analyze micro display technology trends, which are increasingly important across industries, and to provide insights that help companies develop strategies for related technologies. Using a patent database, BERTopic, a recent topic modeling technique, was applied to identify key themes in micro display technology development. Time-series analysis was also conducted to examine changes in research trends over time. These findings are expected to guide companies and research institutions in setting future technology development directions and strategies.
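Minimal BERTopic usage of the kind the abstract describes, with a toy document list standing in for patent abstracts; real corpora would be far larger, and the tiny list is repeated only so the default settings have enough samples.

```python
from bertopic import BERTopic

# Toy stand-in for patent abstracts; replace with a real patent-database
# export in practice. Repetition gives the dimensionality-reduction step
# enough samples to run on this tiny example.
docs = ["micro display pixel architecture for AR glasses",
        "OLED on silicon backplane driving circuit design",
        "waveguide combiner for a head mounted display"] * 10

topic_model = BERTopic(min_topic_size=2)
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info())     # topic sizes and keyword labels

# Time-series view: topics_over_time needs one timestamp per document,
# e.g. the patent filing year.
# tot = topic_model.topics_over_time(docs, timestamps)
```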
Recently, with the development of industrial technology and the increase of young consumers, engine monitoring devices for small ships are rapidly changing from analog devices to LCD-based digital devices. In addition, consumers' product selection criteria increasingly favor luxurious, emotionally appealing products over mere price attractiveness. Therefore, to develop differentiated products, it is necessary to find and improve emotionally attractive quality elements. The purpose of this study is to collect 11 customer requirements related to the emotional quality of the DGP (Digital Gauge Panel) for small ships through customer interviews and to identify attractive quality elements among them. Seventeen design elements were derived by applying QFD to the collected customer requirements; they were classified into one-dimensional, must-be, and attractive quality through Kano model analysis, and six attractive quality elements were confirmed using the Timko customer satisfaction index.
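For reference, the Timko customer satisfaction coefficients can be computed directly from Kano response counts; the counts below are hypothetical.

```python
# Timko customer satisfaction coefficients from Kano response counts.
# A = attractive, O = one-dimensional, M = must-be, I = indifferent.
# Better = (A + O) / (A + O + M + I);  Worse = -(O + M) / (A + O + M + I).
def timko(A, O, M, I):
    total = A + O + M + I
    return (A + O) / total, -(O + M) / total

# Hypothetical counts for one design element of the DGP:
better, worse = timko(A=42, O=31, M=12, I=15)
print(f"Better = {better:.2f}, Worse = {worse:.2f}")
# A high Better with a small |Worse| flags an attractive quality element.
```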
The purpose of this study is to develop a timely fall detection system aimed at improving elderly care, reducing injury risks, and promoting greater independence among older adults. Falls are a leading cause of severe complications, long-term disabilities, and even mortality in the aging population, making their detection and prevention a crucial area of public health focus. This research introduces an innovative fall detection approach by leveraging Mediapipe, a state-of-the-art computer vision tool designed for human posture tracking. By analyzing the velocity of keypoints derived from human movement data, the system is able to detect abrupt changes in motion patterns, which are indicative of potential falls. To enhance the accuracy and robustness of fall detection, this system integrates an LSTM (Long Short-Term Memory) model specifically optimized for time-series data analysis. LSTM's ability to capture critical temporal shifts in movement patterns ensures the system's reliability in distinguishing falls from other types of motion. The combination of Mediapipe and LSTM provides a highly accurate and robust monitoring system with a significantly reduced false-positive rate, making it suitable for real-world elderly care environments. Experimental results demonstrated the efficacy of the proposed system, achieving an F1 score of 0.934, with a precision of 0.935 and a recall of 0.932. These findings highlight the system's capability to handle complex motion data effectively while maintaining high accuracy and reliability. The proposed method represents a technological advancement in fall detection systems, with promising potential for implementation in elderly monitoring systems. By improving safety and quality of life for older adults, this research contributes meaningfully to advancements in elderly care technology.
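A condensed sketch of the pipeline under stated assumptions: Mediapipe Pose for per-frame keypoints, frame-difference velocities as features, and an LSTM binary classifier; the window length and layer sizes are illustrative choices, not the paper's exact configuration.

```python
import cv2
import numpy as np
import mediapipe as mp
from tensorflow.keras import layers, models

mp_pose = mp.solutions.pose

def keypoint_sequence(video_path):
    """Per-frame (x, y) coordinates of the 33 Mediapipe pose landmarks."""
    seq = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if res.pose_landmarks:
                seq.append([c for lm in res.pose_landmarks.landmark
                            for c in (lm.x, lm.y)])
    cap.release()
    return np.array(seq)                    # (frames, 33 landmarks * 2)

def velocity_windows(seq, win=30):
    vel = np.diff(seq, axis=0)              # frame-to-frame keypoint velocity
    return np.stack([vel[i:i + win] for i in range(len(vel) - win + 1)])

# Binary fall / no-fall classifier over velocity windows.
model = models.Sequential([
    layers.Input(shape=(30, 66)),           # 30-frame window, 66 velocities
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```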
MES (manufacturing execution system) plays a critical role in improving production efficiency by managing operations across the entire manufacturing system. Conventional manufacturing systems employ a centralized control structure, which has limitations in terms of the flexibility, scalability, and reconfigurability of the manufacturing system. Agent-based manufacturing systems, on the other hand, are better suited to dynamic environments due to their inherent high autonomy and reconfigurability. In this study, we propose an agent-based MES and present its inter-agent collaboration model along with a data structure. The agent-based MES consists of three types of core agents: the WIPAgent, the PAgent (processing agent), and the MHAgent (material handling agent). The entire manufacturing execution process operates through collaboration among these core agents, and all collaboration is carried out through autonomous interactions between the agents. In particular, the order-by-order dispatching process and the WIP (work-in-process) routing process are represented as separate collaboration models to facilitate understanding and analysis of the processes. In addition, we define the data specifications required for MES implementation and operation, and their respective structures and relationships. Moreover, we build a prototype system employing a simulation model of an exemplary shop-floor as a simulation test bed. The framework proposed in this study can be used as a basis for building an automated operating system in a distributed environment.
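A skeleton of the three core agent types and one message-based interaction, under our own illustrative protocol; the message kinds (REQUEST_PROCESS, REQUEST_MOVE, DONE) are invented for the sketch, not the paper's specification.

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Message:
    sender: str
    kind: str
    payload: dict = field(default_factory=dict)

class Agent:
    """Base agent: each agent owns an inbox registered on a shared bus."""
    def __init__(self, name, bus):
        self.name, self.inbox, self.bus = name, Queue(), bus
        bus[name] = self.inbox
    def send(self, to, kind, **payload):
        self.bus[to].put(Message(self.name, kind, payload))

class WIPAgent(Agent):
    def route(self, wip_id, station):
        # A WIP asks a material-handling agent to move it to its next process.
        self.send("MHAgent-1", "REQUEST_MOVE", wip=wip_id, dest=station)

class PAgent(Agent):        # processing agent wrapping a machine
    def handle(self):
        msg = self.inbox.get()
        if msg.kind == "REQUEST_PROCESS":
            self.send(msg.sender, "DONE", wip=msg.payload["wip"])

class MHAgent(Agent):       # material handling agent wrapping a transporter
    def handle(self):
        msg = self.inbox.get()
        if msg.kind == "REQUEST_MOVE":
            self.send(msg.sender, "DONE", wip=msg.payload["wip"])

bus = {}
wip = WIPAgent("WIP-1", bus)
p, mh = PAgent("PAgent-1", bus), MHAgent("MHAgent-1", bus)
wip.route("lot-42", "M3")
mh.handle()                  # transporter acknowledges the move request
print(wip.inbox.get().kind)  # -> DONE
```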
Bearing-shaft systems are essential components in various automated manufacturing processes, primarily designed for the efficient rotation of a main shaft by a motor. Accurate fault detection is critical for operating manufacturing processes, yet challenges remain in sensor selection and optimization regarding types, locations, and positioning. Sound signals present a viable solution for fault detection, as microphones can capture mechanical sounds from remote locations and have traditionally been employed for monitoring machine health. However, recordings in real industrial environments always contain non-negligible ambient noise, which hampers effective fault detection. Utilizing a high-performance microphone for noise cancellation can be cost-prohibitive and impractical at actual manufacturing sites. To address these challenges, we propose a convolutional neural network-based methodology for fault detection that analyzes the mechanical sounds generated by the bearing-shaft system in the form of log-mel spectrograms. To mitigate the impact of environmental noise in recordings made with commercial microphones, we also develop a denoising autoencoder that operates without requiring any expert knowledge of the system. The proposed DAE-CNN model demonstrates high fault detection performance regardless of whether environmental noise is included (98.1%) or not (100%), indicating that the proposed methodology effectively preserves significant signal features while overcoming the negative influence of ambient noise in the collected datasets, for both fault detection and fault type classification.
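A compact sketch of the signal path under stated assumptions: librosa log-mel extraction, a small convolutional denoising autoencoder trained on noisy/clean patch pairs, and a CNN classifier on the denoised patches. The file path, patch size, and layer sizes are illustrative, not the paper's configuration.

```python
import librosa
from tensorflow.keras import layers, models

# Log-mel spectrogram from a microphone recording (hypothetical file path).
y, sr = librosa.load("bearing_shaft.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
logmel = librosa.power_to_db(mel)            # (64, frames); scale to [0, 1]
                                             # before feeding the DAE below

# Denoising autoencoder: trained on (noisy, clean) log-mel patch pairs so it
# suppresses ambient noise without expert knowledge of the system.
inp = layers.Input(shape=(64, 64, 1))        # fixed-size spectrogram patches
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D()(x)
out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
dae = models.Model(inp, out)
dae.compile(optimizer="adam", loss="mse")
# dae.fit(noisy_patches, clean_patches, ...)  # pairs from the collected data

# Downstream CNN classifier consumes the denoised patches for fault detection.
clf = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(), layers.Dense(1, activation="sigmoid"),
])
clf.compile(optimizer="adam", loss="binary_crossentropy",
            metrics=["accuracy"])
```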
As digital transformation accelerates, platform business has become a core business model in modern society. Platform businesses exhibit a network effect in which the winner takes all. For this reason, a company's pricing policy is crucial for attracting as many customers as possible in the early stages of business. Telecommunication service companies are experiencing stagnant growth due to the saturation of the smartphone market and intensifying price competition, while the burden of maintaining communication networks is increasing due to the rapid growth of traffic generated by domestic and foreign CSPs. This study aims to understand the dynamic characteristics of the telecommunications market by focusing on pricing policy. To this end, we analyzed how ISPs, CSPs, and consumers react to changes in pricing policy, based on the prisoner's dilemma. The dynamic characteristics of the market were analyzed through simulation using an agent-based model.
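A toy sketch of the pricing interaction as a repeated prisoner's dilemma between two ISP agents; the payoff values and tit-for-tat behavior are illustrative assumptions, not the paper's calibrated model.

```python
import random

# Two ISPs repeatedly choose a pricing action (KEEP the current rate or CUT
# it). Payoffs form a prisoner's dilemma: both keep > one cuts (the cutter
# gains share) > both cut (price war).
PAYOFF = {("KEEP", "KEEP"): (3, 3), ("KEEP", "CUT"): (0, 5),
          ("CUT", "KEEP"): (5, 0), ("CUT", "CUT"): (1, 1)}

class ISP:
    def __init__(self):
        self.profit, self.last_rival = 0, "KEEP"
    def act(self):
        # Tit-for-tat with small exploration; stands in for richer agent rules.
        return self.last_rival if random.random() > 0.05 else "CUT"

a, b = ISP(), ISP()
for _ in range(1000):
    ma, mb = a.act(), b.act()
    pa, pb = PAYOFF[(ma, mb)]
    a.profit += pa
    b.profit += pb
    a.last_rival, b.last_rival = mb, ma
print(a.profit, b.profit)   # mutual price cutting depresses joint profit
```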
This study develops a machine learning-based tool life prediction model using spindle power data collected from real manufacturing environments. The primary objective is to monitor tool wear and predict optimal replacement times, thereby enhancing manufacturing efficiency and product quality in smart factory settings. Accurate tool life prediction is critical for reducing downtime, minimizing costs, and maintaining consistent product standards. Six machine learning models (Random Forest, Decision Tree, Support Vector Regressor, Linear Regression, XGBoost, and LightGBM) were evaluated for their predictive performance. Among these, the Random Forest Regressor demonstrated the highest accuracy, with an R2 value of 0.92, making it the most suitable for tool wear prediction. Linear Regression also provided detailed insight into the relationship between tool usage and spindle power, offering a practical alternative for precise predictions in scenarios with consistent data patterns. The results highlight the potential for real-time monitoring and predictive maintenance, significantly reducing downtime, optimizing tool usage, and improving operational efficiency. Challenges such as data variability, real-world noise, and model generalizability across diverse processes remain areas for future exploration. This work contributes to advancing smart manufacturing by integrating data-driven approaches into operational workflows and enabling sustainable, cost-effective production environments.
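A minimal sketch of the model-comparison step on synthetic stand-in features (cumulative cutting time, mean power, and power variance are assumed feature names); only the scikit-learn baselines are shown, and XGBoost/LightGBM would follow the same fit/score pattern.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical spindle-power features per operation; the target stands in
# for remaining tool life (arbitrary units).
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 3))   # cutting time, mean power, power variance
y = 50 - 8 * X[:, 0] + 3 * X[:, 1] + rng.normal(0, 2, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("RandomForest", RandomForestRegressor(random_state=0)),
                    ("Linear", LinearRegression())]:
    model.fit(X_tr, y_tr)
    print(name, f"R2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```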
This study proposes a weight optimization technique based on Mixture Design of Experiments (MD) to overcome the limitations of traditional ensemble learning and achieve optimal predictive performance with minimal experimentation. Traditional ensemble learning combines the predictions of multiple base models through a meta-model to generate a final prediction but has limitations in systematically optimizing the combination of base model performances. In this research, MD is applied to efficiently adjust the weights of each base model, constructing an optimized ensemble model tailored to the characteristics of the data. An evaluation of this technique across various industrial datasets confirms that the optimized ensemble model proposed in this study achieves higher predictive performance than traditional models in terms of F1-Score and accuracy. This method provides a foundation for enhancing real-time analysis and prediction reliability in data-driven decision-making systems across diverse fields such as manufacturing, fraud detection, and medical diagnostics.
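One way to realize the weight search is sketched below as an exhaustive simplex-lattice evaluation over base-model probabilities; an actual mixture design would instead fit a Scheffé-type model to a small set of lattice runs, so this is only a brute-force stand-in for the paper's method.

```python
import numpy as np
from itertools import product
from sklearn.metrics import f1_score

def simplex_lattice(n_models, m=10):
    """All weight vectors with components in {0, 1/m, ..., 1} summing to 1:
    the candidate blends a {n_models, m} mixture design would examine."""
    return [np.array(p) / m
            for p in product(range(m + 1), repeat=n_models) if sum(p) == m]

def best_blend(probas, y_true):
    """probas: list of (n_samples,) predicted positive-class probabilities,
    one per base model. Returns the lattice weights maximizing F1."""
    best_w, best_f1 = None, -1.0
    for w in simplex_lattice(len(probas)):
        blend = sum(wi * p for wi, p in zip(w, probas))
        f1 = f1_score(y_true, blend >= 0.5)
        if f1 > best_f1:
            best_w, best_f1 = w, f1
    return best_w, best_f1

# Usage (hypothetical): probabilities from e.g. RF, SVM, and MLP base models
# on a validation fold.
# w, f1 = best_blend([p_rf, p_svm, p_mlp], y_val)
```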
This study focuses on the effectiveness of regional business support programs funded by South Korea's Balanced National Development Special Account, one of the policies designed to address regional imbalances and promote local autonomy. Using an analytical approach based on the DEA (Data Envelopment Analysis) methodology, this study analyzed the efficiency of 76 star companies in the Jeonbuk region based on their performance from 2018 to 2023. The study was designed to improve on the limitations of previous studies, which analyzed only simple short-term input-output efficiency, by using six years of mid-term data to comprehensively evaluate input variables in both the R&D and non-R&D sectors. The main purpose of this study is to analyze the effectiveness of the expiring Star Company Development Program by evaluating the efficiency of supported company groups using DEA, and to propose support models and policy suggestions for the upcoming regional specialized industry support program by identifying the features of both optimal and inefficient models. To this end, employment, along with financial indicators such as sales revenue, operating profit, and total assets, was set as the output variables, and R&D and non-R&D support amounts were set as the input variables. According to the results, the optimally efficient group has strong intellectual property acquisition capabilities and continuous R&D investment, showing that continuous innovation activities are a key factor in improving the effectiveness of support. This study also found that, from a mid- to long-term perspective, policy support programs should be customized to the unique characteristics of each industry field. Based on this, it was suggested that upcoming regional specialized industry support programs in the Jeonbuk region should include policy planning and support program design that complement the weaknesses of each industry field.
A multi-criteria decision-making (MCDM) method allows decision makers to systematically evaluate alternatives based on a predefined set of decision criteria. The most commonly used MCDM methods include the Analytic Hierarchy Process (AHP), the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), Weighted Aggregated Sum Product Assessment (WASPAS), and the Preference Selection Index (PSI). In MCDM problems, performance ratings for different criteria are commonly measured in different units. Normalization is thus used to convert performance ratings into commensurable data. Many normalization techniques can be used for MCDM problems, and much effort has been devoted to comparative studies on their suitability. However, most studies recommend normalization methods for specific MCDM problems based on specific data samples. Therefore, this study proposes the most suitable normalization method for each MCDM method under consideration using extensive data samples. A wide range of MCDM problems with various measurement scales are generated by simulation for the comparative study. The experimental results show that the vector normalization method is best suited for all MCDM methods considered in this study.
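For concreteness, here is the vector normalization that the experiments favor, applied column-wise to a hypothetical decision matrix:

```python
import numpy as np

# Vector normalization: r_ij = x_ij / sqrt(sum_i x_ij^2), applied per column
# so that criteria measured in different units become commensurable.
def vector_normalize(X):
    return X / np.linalg.norm(X, axis=0)

# Hypothetical decision matrix: 3 alternatives x 3 criteria
# (cost in $, weight in kg, score on a 1-10 scale).
X = np.array([[250.0, 16.0, 7.0],
              [200.0, 18.0, 9.0],
              [300.0, 14.0, 6.0]])
print(vector_normalize(X))   # every column now has unit Euclidean norm
```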
This study proposes a mathematical model to optimize fighter aircraft-weapon combinations for the ROKAF (Republic of Korea Air Force). With the recent population decline in the Republic of Korea, there is an urgent need for efficient weapon system operations in light of decreasing military personnel. To address this issue, we seek to reduce the workload of pilots and maintenance personnel by operating an optimal set of weapons instead of deploying all possible armaments for each aircraft type. To achieve this, various factors for optimizing the fighter-weapon combinations were identified and quantified. A model was then constructed using goal programming, with objective functions based on the compatibility, CEP (Circular Error Probable), and firing range of the weapons, along with the planned wartime mission-specific weapon ratios for each aircraft type. Analysis of the experimental results indicates that the proposed model significantly increases mission performance efficiency compared to the existing system in both operational and maintenance aspects. We hope that our model will help improve the operational capabilities of the Republic of Korea Air Force.
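A toy goal-programming sketch in PuLP, with soft deviation variables around a mission-mix goal and a hard capacity constraint; all data values (scores, goal, capacity) are illustrative, not the study's.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

# Choose how many units of each weapon type to assign to one aircraft type,
# softly meeting a planned mission-mix goal while respecting a hard capacity
# limit. The score would aggregate compatibility, CEP, and firing range.
weapons = ["W1", "W2", "W3"]
score = {"W1": 0.9, "W2": 0.7, "W3": 0.8}   # illustrative composite scores
goal_units = 10                              # planned wartime weapon quantity
capacity = 12                                # pylon / maintenance capacity

x = {w: LpVariable(f"x_{w}", lowBound=0, cat="Integer") for w in weapons}
d_minus = LpVariable("under_goal", lowBound=0)   # deviation below the goal
d_plus = LpVariable("over_goal", lowBound=0)     # deviation above the goal

prob = LpProblem("fighter_weapon_mix", LpMinimize)
# Penalize missing the mix goal more than exceeding it; reward weapon scores.
prob += 5 * d_minus + 1 * d_plus - lpSum(score[w] * x[w] for w in weapons)
prob += lpSum(x[w] for w in weapons) + d_minus - d_plus == goal_units
prob += lpSum(x[w] for w in weapons) <= capacity
prob.solve()
print({w: x[w].value() for w in weapons})
```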
Sequential zone picking is an order picking method designed to enhance warehouse efficiency by dividing the storage area into multiple zones and picking items in a sequential order across these zones. Picked items are often placed in dedicated totes and transported between zones using a conveyor system, which manages the picking flow but can occasionally result in inefficiencies during the process. This study presents a variant of the sequential zone picking system, called a dual-lane zone picking system (DZP), which consists of two parallel conveyor lanes without buffers between consecutive zones. This conveyor configuration allows the picker in each zone to alternate processing between the two lanes, thereby lessening the constraints of tote transitions between zones and improving both system throughput and picker utilization. We design and conduct a series of experiments using a discrete-event simulation model to evaluate the performance of DZPs. The experiment results indicate that DZP surpasses the original single-lane zone picking system by shortening the system’s mean flow time in low flow intensity scenarios and achieving a higher maximum throughput and worker utilization in high flow intensity scenarios. Additionally, we investigate the effects of the number of zones and order batching size on the performance of DZP to gain further insights into the system’s operational control.
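A minimal discrete-event sketch of one dual-lane zone in SimPy: a single picker alternates between two lanes, serving the fuller lane first; the arrival and pick-time parameters are illustrative, not the experiment's settings.

```python
import random
import simpy

# Totes arrive on two parallel lanes; one picker takes the next tote from
# whichever lane has more work. Rates are chosen so utilization stays < 1.
PICK_TIME, ARRIVAL_MEAN, SIM_TIME = 0.6, 1.5, 1000

def tote_source(env, lane, mean):
    while True:
        yield env.timeout(random.expovariate(1.0 / mean))
        lane.append(env.now)                 # record the tote's arrival time

def picker(env, lanes, stats):
    while True:
        busy = [l for l in lanes if l]
        if not busy:
            yield env.timeout(0.1)           # idle poll; keeps the sketch simple
            continue
        lane = max(busy, key=len)            # serve the fuller lane first
        arrived = lane.pop(0)
        yield env.timeout(random.expovariate(1.0 / PICK_TIME))
        stats.append(env.now - arrived)      # tote flow time through the zone

env = simpy.Environment()
lanes, stats = ([], []), []
for lane in lanes:
    env.process(tote_source(env, lane, ARRIVAL_MEAN))
env.process(picker(env, lanes, stats))
env.run(until=SIM_TIME)
print(f"totes: {len(stats)}, mean flow time: {sum(stats) / len(stats):.2f}")
```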
Rapidly changing environmental factors due to climate change are increasing the uncertainty of crop growth, and the importance of crop yield prediction for food security is becoming increasingly evident in the Republic of Korea. Traditionally, crop yield prediction models have been developed using statistical techniques such as regression models and correlation analysis. However, as machine learning techniques develop, they can predict crop yield more accurately than statistical techniques. This study proposes an onion yield prediction framework that accurately predicts onion yield using various environmental factor data. Temperature, humidity, precipitation, solar radiation, and wind speed are considered as climate factors, and irrigation water and nitrogen application rate are considered as soil factors. To improve the performance of the prediction model, an ensemble learning technique is applied to the proposed framework. The coefficient of determination of the proposed stacked ensemble framework is 0.96, a 24.68% improvement over the 0.77 achieved by the existing single machine learning model. The framework can be applied to individual farmland, so each farm can obtain a customized prediction model, which is visualized through a web system.
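A minimal sketch of the stacking idea with scikit-learn's StackingRegressor on synthetic stand-in data; the feature layout and base learners are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Hypothetical onion-yield data: five climate factors (temperature, humidity,
# precipitation, solar radiation, wind speed) plus two soil factors
# (irrigation water, nitrogen rate); the target stands in for yield.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 7))
y = 2 * X[:, 0] + X[:, 2] + 0.5 * X[:, 5] + rng.normal(0, 0.5, 300)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("svr", SVR())],
    final_estimator=Ridge(),   # meta-model combining base-model predictions
)
print("stacked R2:", cross_val_score(stack, X, y, scoring="r2").mean())
```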