New motor development requires high-speed load testing using dynamometer equipment to calculate motor efficiency. Abnormal noise and vibration may occur in test equipment rotating at high speed due to misalignment of the connecting shaft or looseness of the fixture, which may lead to safety accidents. In this study, three single-axis vibration sensors for the X, Y, and Z axes were attached to the surface of the test motor to measure vibration values. Analog data collected from these sensors was used in classification models for anomaly detection. Since the classification accuracy was only around 93%, commonly used hyperparameter optimization techniques such as grid search, random search, and Bayesian optimization were applied to increase accuracy. In addition, the Response Surface Method based on Design of Experiments was also used for hyperparameter optimization. However, these methods proved to have limited ability to improve accuracy, because the sampled data from an analog signal does not reflect the patterns hidden in the signal. Therefore, to capture pattern information in the sampled data, we computed descriptive statistics such as mean, variance, skewness, kurtosis, and percentiles of the analog data and applied them to the classification models. Classification models using these descriptive statistics showed excellent performance improvement. The developed model can be used as a monitoring system that detects abnormal conditions during motor tests.
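As a concrete illustration of the feature-extraction step described above, the sketch below computes the named descriptive statistics (mean, variance, skewness, kurtosis, percentiles) from a sampled signal. The function name and the simple nearest-rank percentile rule are our own assumptions, not the study's implementation.

```python
import statistics

def describe(signal, percentiles=(25, 50, 75)):
    """Summarize a sampled vibration signal with the descriptive statistics
    named in the study: mean, variance, skewness, kurtosis, and selected
    percentiles. Assumes a non-constant signal (std > 0)."""
    n = len(signal)
    mean = statistics.fmean(signal)
    var = statistics.pvariance(signal, mu=mean)
    std = var ** 0.5
    # Population skewness and (non-excess) kurtosis via standardized moments.
    skew = sum(((x - mean) / std) ** 3 for x in signal) / n
    kurt = sum(((x - mean) / std) ** 4 for x in signal) / n
    # Nearest-rank percentiles from the sorted samples.
    ordered = sorted(signal)
    pct = {f"p{p}": ordered[min(n - 1, round(p / 100 * (n - 1)))] for p in percentiles}
    return {"mean": mean, "var": var, "skew": skew, "kurt": kurt, **pct}
```

Feeding such per-window summaries to a classifier, instead of the raw samples, is the study's route to recovering the signal's hidden patterns.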
In recent automated manufacturing systems, compressed-air-based pneumatic cylinders have been widely used for basic operations such as picking up and moving a target object. Although they are categorized as relatively small machines, many linear and rotary cylinders play an important role in discrete manufacturing systems. Therefore, a sudden operation stop or interruption due to a fault in a pneumatic cylinder increases repair costs, decreases production, and even threatens the safety of workers. In this regard, this study proposed a fault detection technique by developing a time-variant deep learning model from multivariate sensor data analysis to estimate the current health state at four levels. In addition, it aims to establish a real-time fault detection system that allows workers to immediately identify and manage the cylinder’s status either on an actual shop floor or in a remote management situation. To validate and verify the performance of the proposed system, we collected multivariate sensor signals from a rotary cylinder, and the system successfully detected the health state of the pneumatic cylinder at four severity levels. Furthermore, the optimal sensor location and signal type were analyzed through statistical inference.
Fueled by international efforts toward AI standardization, including those by the European Commission, the United States, and international organizations, this study introduces an AI-driven framework for analyzing advancements in drone technology. Utilizing project data retrieved from the NTIS DB via the “drone” keyword, the framework employs a diverse toolkit of supervised learning methods (Keras MLP, XGBoost, LightGBM, and CatBoost) enhanced by BERTopic, a natural language analysis tool. This multifaceted approach ensures both comprehensive data quality evaluation and in-depth structural analysis of documents. Furthermore, a 6T-based classification method screens out non-applicable data for year-on-year AI analysis, demonstrably improving classification accuracy. Leveraging AI capabilities, including GPT-4, this research unveils year-on-year trends in emerging keywords and employs them to generate detailed summaries, enabling efficient processing of large text datasets and offering an AI analysis system applicable to policy domains. Notably, this study not only advances methodologies aligned with AI Act standards but also lays the groundwork for responsible AI implementation through analysis of government research and development investments.
This study introduces a novel approach for identifying potential failure risks in missile manufacturing by leveraging Quality Inspection Management (QIM) data to address the challenges presented by a dataset comprising 666 variables and data imbalances. The utilization of SMOTE for data augmentation and Lasso Regression for dimensionality reduction, followed by the application of a Random Forest model, results in a 99.40% accuracy rate in classifying missiles with a high likelihood of failure. Such measures enable the preemptive identification of missiles at a heightened risk of failure, thereby mitigating the risk of field failures and extending missile service life. The integration of Lasso Regression and Random Forest is employed to pinpoint critical variables and test items that significantly impact failure, with a particular emphasis on variables related to performance and connection resistance. Moreover, the research highlights the potential for broadening the scope of data-driven decision-making within quality control systems, including the refinement of maintenance strategies and the adjustment of control limits for essential test items.
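The SMOTE step mentioned above generates synthetic minority-class samples by interpolating between a minority sample and one of its nearest neighbors. Below is a minimal pure-Python sketch of that idea only; it is not the study's actual pipeline, which additionally applies Lasso Regression and a Random Forest.

```python
import random

def smote_like_oversample(minority, k=2, n_new=4, seed=0):
    """Create synthetic minority-class samples, SMOTE-style: pick a sample,
    find its k nearest neighbors (Euclidean), and interpolate a new point
    at a random position between the sample and one neighbor."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbors of x by squared Euclidean distance (excluding x).
        neighbors = sorted((p for p in minority if p is not x),
                           key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nn = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nn)))
    return synthetic
```

Because each new point lies on the segment between two real minority samples, the augmented class stays inside the region the real data occupies.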
As environmental concerns escalate, recycling of aluminum scrap has increased notably within the aluminum alloy production sector. Precise control of essential components such as Al, Cu, and Si is crucial in aluminum alloy production. However, recycled metal products comprise various metal components, leading to inherent uncertainty in component concentrations. Thus, input quantities of recycled metal products must be determined carefully to adjust the composition ratio of components. This study proposes a stable input determination heuristic algorithm that accounts for the uncertainty arising from utilizing recycled metal products. The objective is to minimize total cost while satisfying the desired component ratios in aluminum manufacturing processes. The proposed algorithm is designed to handle the increased complexity introduced by this uncertainty. Its effectiveness is validated by comparing its performance with an algorithm mimicking the input determination method used in the field. The proposed heuristic algorithm demonstrates superior results compared to the field-mimicking algorithm and is anticipated to serve as a useful tool for decision-making in realistic scenarios.
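To make the cost-versus-composition trade-off concrete, the toy sketch below brute-forces mixing fractions of scrap types so that a single component (Cu) hits a target concentration at minimum cost. The data layout and single-component constraint are our own simplifying assumptions; the study's heuristic handles multiple components and compositional uncertainty.

```python
from itertools import product

def cheapest_blend(scraps, target_cu, tol=0.01, steps=10):
    """Enumerate mixing fractions (on a grid of `steps` increments) of the
    available scrap types, keep blends whose Cu concentration is within
    `tol` of the target, and return the cheapest as (cost, fractions).
    `scraps` maps a name to (cost per kg, Cu fraction)."""
    names = list(scraps)
    best = None
    for mix in product(range(steps + 1), repeat=len(names)):
        if sum(mix) != steps:          # fractions must sum to one
            continue
        w = [m / steps for m in mix]
        cu = sum(wi * scraps[n][1] for wi, n in zip(w, names))
        if abs(cu - target_cu) > tol:  # composition constraint
            continue
        cost = sum(wi * scraps[n][0] for wi, n in zip(w, names))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, w)))
    return best
```

Even in this two-scrap toy, the cheapest feasible blend sits at the edge of the composition tolerance, which is exactly why uncertainty in the scrap compositions makes naive cost minimization risky.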
We examine a single machine scheduling problem with step-improving jobs, in which job processing times decrease stepwise according to their starting times. The objective is to minimize total completion time, defined as the sum of the completion times of all jobs. Total completion time is frequently considered as an objective because it is closely related to the total time jobs spend in the system as well as to work-in-progress. Many applications of this problem can be observed in the real world, such as data gathering networks, system upgrades or technological shocks, and production lines operated with part-time workers in each shift. Our goal is to develop a scheduling algorithm that can provide an optimal solution. To this end, we present an efficient branch-and-bound algorithm with an assignment-based node design and tight lower bounds that can prune branch-and-bound nodes at early stages and thereby reduce computation time. In numerical experiments designed to cover various scenarios, the proposed algorithm is shown to outperform the existing method and to solve practical problems within reasonable computation time.
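A minimal encoding of the setting helps fix ideas: suppose each job takes its full processing time p if it starts before a common breakpoint T, and a reduced time p - d afterwards, and the objective sums completion times over a given sequence. This toy encoding is our own assumption about the step-improving structure, not the paper's exact formulation.

```python
def total_completion_time(sequence, T):
    """Total completion time of a job sequence on a single machine with
    step-improving jobs: a job takes its full time p if it starts before
    the common breakpoint T, and the reduced time p - d afterwards.
    Each job is a (p, d) pair."""
    t = 0      # current time (completion of the previous job)
    total = 0  # accumulated sum of completion times
    for p, d in sequence:
        t += p if t < T else p - d  # step decrease once the breakpoint is passed
        total += t
    return total
```

An exact method such as the proposed branch and bound would search over sequences, using lower bounds on this quantity to prune partial schedules early.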
The use of big data needs to be emphasized in policy formulation by public officials in order to improve the transparency, efficiency, and reliability of government policies. ‘Hye-Ahn’, a government-wide big data platform, was built with this goal, and the number of its subscribers has grown significantly from 2,000 at the end of 2016 to 100,000 by August 2018. Additionally, the central and local governments are expanding their big data-related budgets. In this study, we derived the costs and benefits of ‘Hye-Ahn’ and used them to conduct an economic feasibility analysis. As a result, even when only some quantitative benefits are considered without qualitative benefits, the net present value, benefit/cost ratio, and internal rate of return turned out to be 22,662 million won, 2.3213, and 41.8%, respectively. Since these exceed the respective comparison criteria of 0 won, 1.0, and 5.0%, ‘Hye-Ahn’ can be seen as economically feasible. As noted earlier, the number of analyses using ‘Hye-Ahn’ is increasing, so the benefits are expected to grow over time. Finally, the socioeconomic value gained when the results of analyses using ‘Hye-Ahn’ are applied to policy is expected to be significant.
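The three criteria reported above (net present value, benefit/cost ratio, internal rate of return) are standard discounted-cash-flow quantities. A minimal sketch of NPV and a bisection-based IRR follows; the study's actual cost and benefit streams are not reproduced here.

```python
def npv(rate, cashflows):
    """Net present value of cashflows c_0, c_1, ... discounted at `rate`."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection: the discount rate at which
    NPV = 0. Assumes NPV is positive at `lo` and negative at `hi`."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

The benefit/cost ratio is then simply the NPV of the benefit stream divided by the NPV of the cost stream at the chosen discount rate (5.0% in the study).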
This paper presents a path planning optimization model for engineering units to install obstacles in the shortest time during wartime. In a rapidly changing battlefield environment, engineering units operate various engineering obstacles to fix, bypass, and delay enemy maneuvers, and the success of the operation depends on efficiently planning the obstacle installation path in the shortest time. Existing studies have not reflected the existence of obstacle material storage sites that must be visited before installing obstacles, and they do not fit the operational reality in which installation is carried out continuously on a regional basis. By presenting a Mixed Integer Programming optimization model reflecting various constraints suited to the battlefield environment, this study seeks to promote the efficient mission performance of engineering units during wartime.
The threat of North Korea's long-range firepower is recognized as a typical asymmetric threat, and South Korea is prioritizing the development of a Korean-style missile defense system to defend against it. Previous research modeled North Korean long-range artillery attacks as a Markov Decision Process (MDP) and used Approximate Dynamic Programming as an algorithm for missile defense; due to its limitations, this study applies deep reinforcement learning techniques that incorporate deep learning. In this paper, we develop a missile defense algorithm by applying a modified DQN with multi-agent-based deep reinforcement learning. Through this, we investigate whether an efficient missile defense system can be implemented considering the style of attacks in recent wars, such as how effectively it responds to enemy missile attacks, and show that the policies learned through deep reinforcement learning yield superior outcomes.
The purpose of this study is to analyze the effect of the Army Risk Assessment System (ARAS), which is used to prevent safety accidents in the ROK Army. Based on prior research, we select four indicators related to accident prevention and analyze the differences before and after ARAS operation for each indicator using a paired-samples t-test. We also analyze the correlation between the degree of ARAS operation and the safety accident status of 112 ROK Army units, and conduct an evaluation of each function within the system using the IPA method. The results are as follows: all four indicators improved compared to before ARAS operation, and the differences are statistically significant. There is also a negative correlation between the degree of ARAS operation and the occurrence of safety accidents, so the operation of ARAS has a positive effect on preventing safety accidents. Finally, among the 15 functions of ARAS, four functions require improvement. The findings have implications for the necessity of a computerized system in enforcing risk assessment; moreover, not only whether ARAS is operated but how well it is operated matters. Lastly, we propose improvement plans for each function to support effective operation.
As the complexity and uncertainty of international construction projects increase, the importance of risk management capabilities in the construction industry has become more pronounced. Accordingly, Enterprise Risk Management (ERM) has become a widely adopted approach among organizations as a new way to manage risk more effectively. Despite its growing application, research related to ERM is still in its infancy, and most existing studies have been limited to financial industries. Therefore, this study aims to empirically examine the influence of ERM’s core elements on project risk management (PRM) and project performance within construction firms. Our findings indicate that the key ERM components—organization, policy, and culture—significantly enhance PRM processes, underscoring their critical role and importance. Additionally, effective PRM positively affects project outcomes, highlighting its significance for construction companies engaged in international projects. While ERM does not directly impact project performance, it indirectly improves outcomes through enhanced PRM capabilities. This suggests that ERM will contribute to the firm’s performance by improving its PRM capability through policies and a risk-focused culture corresponding to the adopted ERM organization and system.
With the advent of the 4th Industrial Revolution, changes in the market and employment environments are accelerating due to smart technological innovation, and securing professional manpower and developing human resources have become very important for domestic small and medium-sized enterprises (SMEs). Recently, most domestic SMEs have been experiencing hiring difficulties, and the development and training of human resources to overcome this still lacks systemization, despite much support from the government. This reflects the reality that it is not easy for SMEs to invest the training costs and time needed for new employees to adapt. Against this background, the work-study parallel project was introduced to cultivate practical talent in SMEs. Work-study parallel training is carried out in the form of mentoring between corporate field teachers and learning workers in actual workplaces, and even for the same training, outcomes differ depending on the learner's attitude, learning motivation, and training achievement. Ego state theory can identify personality types and has the advantage of enabling people to understand and acknowledge themselves and others and to intentionally reinforce positive factors that foster better interpersonal relationships. Accordingly, the purpose of this study is to analyze, based on their ego states, the attitudes of learning workers, who are the actual subjects for improving the performance of work-study parallel projects and for establishing stable settlement within the company. Through this study, we aim to understand the impact of learning workers' personality types on training performance and to suggest ways to improve training performance through the work-study parallel system.
In the manufacturing industry, dispatching systems play a crucial role in enhancing production efficiency and optimizing production volume. However, in dynamic production environments, conventional static dispatching methods struggle to adapt to various environmental conditions and constraints, leading to problems such as reduced production volume, delays, and resource wastage. Therefore, there is a need for dynamic dispatching methods that can quickly adapt to changes in the environment. In this study, we aim to develop an agent-based model that considers dynamic situations through interaction between agents. Additionally, we intend to utilize the Q-learning algorithm, which possesses the characteristics of temporal difference (TD) learning, to automatically update and adapt to dynamic situations. This means that Q-learning can effectively consider dynamic environments by sensitively responding to changes in the state space and selecting optimal dispatching rules accordingly. The state space includes information such as inventory and work-in-process levels, order fulfilment status, and machine status, which are used to select the optimal dispatching rules. Furthermore, we aim to minimize total tardiness and the number of setup changes using reinforcement learning. Finally, we will develop a dynamic dispatching system using Q-learning and compare its performance with conventional static dispatching methods.
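The temporal-difference update at the heart of the Q-learning approach described above can be written in a few lines. Here the state would encode inventory, work-in-process, order, and machine status, and the actions are candidate dispatching rules; the rule names below are illustrative assumptions, not the study's implementation.

```python
import random
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One temporal-difference Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q[(s, a)]

def select_rule(Q, s, actions, eps, rng):
    """Epsilon-greedy choice among candidate dispatching rules for state s."""
    if rng.random() < eps:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])
```

With `Q = defaultdict(float)`, unseen state-rule pairs start at zero; a negative reward proportional to tardiness and setup changes would steer the agent toward the study's stated objectives.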
This study emphasizes R&D as a management strategy for small and medium-sized manufacturing enterprises (SMEs) to achieve competitive advantage and aims to analyze the impact of innovation resistance, prior knowledge, and technological capability on the intention to adopt R&D. The research targeted 403 decision-makers from SMEs that have not adopted R&D. The analysis revealed the following key findings: both technological capability and prior knowledge had a negative effect on innovation resistance. In addition, technological capability and prior knowledge had a positive effect on adoption intention, and innovation resistance had a negative effect on adoption intention. The indirect effects of technological capability and prior knowledge were both positive. We also tested whether dependency on partners and trading organizations that had adopted R&D had a moderating effect, but it was not significant. The academic implication of this study is a detailed analysis of how prior knowledge and technological capability affect innovation resistance in SMEs and, in turn, the intention to adopt R&D. The practical implication is a direction for SMEs to reduce innovation resistance in adopting R&D: companies need to recognize the suitability of R&D and the importance of technological capability and prior knowledge in order to reduce innovation resistance.
This study introduces and experimentally validates a novel approach that combines Instruction fine-tuning and Low-Rank Adaptation (LoRA) fine-tuning to optimize the performance of Large Language Models (LLMs). These models have become revolutionary tools in natural language processing, showing remarkable performance across diverse application areas. However, optimizing their performance for specific domains necessitates fine-tuning of the base models (FMs), which is often limited by challenges such as data complexity and resource costs. The proposed approach aims to overcome these limitations by enhancing the performance of LLMs, particularly in the analysis precision and efficiency of national Research and Development (R&D) data. The study provides theoretical foundations and technical implementations of Instruction fine-tuning and LoRA fine-tuning. Through rigorous experimental validation, it is demonstrated that the proposed method significantly improves the precision and efficiency of data analysis, outperforming traditional fine-tuning methods. This enhancement is not only beneficial for national R&D data but also suggests potential applicability in various other data-centric domains, such as medical data analysis, financial forecasting, and educational assessments. The findings highlight the method's broad utility and significant contribution to advancing data analysis techniques in specialized knowledge domains, offering new possibilities for leveraging LLMs in complex and resource-intensive tasks. This research underscores the transformative potential of combining Instruction fine-tuning with LoRA fine-tuning to achieve superior performance in diverse applications, paving the way for more efficient and effective utilization of LLMs in both academic and industrial settings.
This study focuses on a path planning algorithm for large-scale autonomous delivery using drones and robots in urban environments. When generating delivery routes in urban environments, it is essential to avoid obstacles such as buildings, parking lots, or anything else that could cause property damage. A commonly used method for obstacle avoidance is the grid-based A* algorithm. However, in large-scale urban environments, it is not feasible to set the resolution of the grid very high. If the grid cells are not sufficiently small during path planning, inefficient paths may be generated when avoiding obstacles, and smaller obstacles may be overlooked. To solve these issues, this study proposes a method that initially creates a low-resolution wide-area grid and then progressively reduces the grid cell size in areas containing registered obstacles, maintaining real-time efficiency in path generation. To implement this, obstacles in the operational area must first be registered on the map. When obstacle information is updated, the cells containing obstacles undergo a primary subdivision, and cells closer to the obstacles undergo a secondary subdivision. This approach is validated in a simulation environment and compared with previous research in terms of computing time and path distance.
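For reference, here is a compact version of the grid-based A* baseline discussed above (4-neighborhood, Manhattan-distance heuristic, unit step cost). The proposed method additionally refines the grid near registered obstacles, which this sketch does not attempt.

```python
import heapq

def astar(grid, start, goal):
    """Grid-based A* with a 4-neighborhood and Manhattan-distance heuristic.
    `grid` is a list of equal-length strings where '#' marks an obstacle.
    Returns the length (in steps) of a shortest path, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    seen = set()
    while open_set:
        _, g, cell = heapq.heappop(open_set)
        if cell == goal:
            return g
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return None
```

The cost of this baseline grows with the number of cells, which is exactly why a uniformly fine grid becomes infeasible over a wide urban area and motivates the adaptive subdivision proposed in the study.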
The distribution and logistics industries account for a substantial share of South Korea's GDP (gross domestic product), and related companies make up a quarter of all companies in the country. The number of retail tech companies has increased rapidly due to the acceleration of online and contactless shopping trends. Furthermore, major distribution and logistics companies are pursuing integrated data management of the fulfillment process. In contrast, small and medium distribution companies still lack the capacity and ability to pursue digital innovation and smartization. Therefore, in this paper, a deep learning-based demand forecasting and recommendation model is proposed to improve business competitiveness. The proposed model is developed on real sales transaction data to predict future demand for each product. It consists of six models: MLP (multi-layer perceptron), CNN (convolutional neural network), RNN (recurrent neural network), LSTM (long short-term memory), and Conv1D-BiLSTM (convolutional bidirectional LSTM) for demand forecasting, and collaborative filtering for recommendation. For each product, the best-performing model provides the prediction, and the recommendation model can recommend the best-selling products from a company's own sales list as well as from competitors' item lists. The proposed demand forecasting model is expected to improve the competitiveness of the small and medium-sized distribution and logistics industry.
Due to the complexity of urban areas, the city vehicle routing problem has long been difficult. The problem involves factors such as parking availability, road conditions, and traffic congestion, all of which increase transportation costs and delivery times. One effective solution is the use of parcel lockers located near customer sites, where products are stored for customers to pick up. When a vehicle delivers products to a designated parcel locker, customers in the vicinity pick up their products from that locker. Recently, identifying optimal locations for these parcel lockers has become an important research issue. This paper addresses the parcel locker location problem within the context of urban traffic congestion. By considering dynamic environmental factors, we propose a Markov decision process model to tackle the city vehicle routing problem. To better reflect real situations, we use optimal paths for the distances between nodes. Numerical results demonstrate the viability of our model and solution strategy.
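The MDP machinery referred to above can be illustrated with a generic value-iteration solver over a finite state space. The transition and reward structures passed in are placeholders (for example, a negative congestion-dependent travel time as reward), not the paper's routing model.

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Generic value iteration for a finite MDP.

    P[(s, a)] is a list of (probability, next_state) pairs and R[(s, a)]
    is the expected immediate reward. Returns the value function V and a
    greedy policy derived from it."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality backup for state s.
            best = max(R[(s, a)] + gamma * sum(p * V[s2] for p, s2 in P[(s, a)])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {s: max(actions,
                     key=lambda a: R[(s, a)] + gamma * sum(p * V[s2] for p, s2 in P[(s, a)]))
              for s in states}
    return V, policy
```

In a routing context, states would encode vehicle position and congestion conditions, and the greedy policy would choose the next node or locker to serve.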
Vertical takeoff and landing (VTOL) is a core feature of unmanned aerial vehicles (UAVs), which are commonly referred to as drones. In emerging smart logistics, drones are expected to play an increasingly important role as mobile platforms. Therefore, research on last-mile delivery using drones is on the rise. There is a growing trend toward providing drone delivery services, particularly among retailers that handle small and lightweight items. However, there is still a lack of research on a structural definition of the VTOL drone flight model for multi-point delivery service. This paper describes a VTOL drone flight route structure for a multi-drone delivery service using rotary-wing VTOL drones. First, we briefly explore the factors to be considered when providing drone delivery services. Second, a VTOL drone flight route model is introduced using the idea of the nested graph. Based on the proposed model, we describe various time-related attributes for delivery services using drones and present corresponding calculation methods. Additionally, as an application of the drone route model and the time attributes, we comprehensively describe a simple example of multi-drone delivery for first-come-first-served (FCFS) services.
The recent rapid growth of e-commerce markets has stimulated research on order picking systems to improve their efficiency in distribution centers. Many companies and researchers have developed various types of order picking systems and pursued the corresponding optimal operation policies. However, the performance of a system under an optimal policy often depends on the structure of the center and the operating environment. Based on a simulation model that mimics a unique zone picking system operated by a real company in the Republic of Korea, this study compares several operation policies and finds the most appropriate order selection rule and worker assignment policy for the system. Under all scenarios considered in this study, simulation results show that it is recommendable to assign more efficient workers to the zones with heavier workloads. They also show that selecting the order with the maximum number of non-repeatedly visited zones from the order list provides the most consistent and stable performance with respect to flow time, makespan, and utilization of the system, even under the scenario with breakdown zones. On the other hand, selecting the order with the minimum ratio of penalty to the number of zones performs the worst in all scenarios considered.
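The winning order selection rule above — pick the order whose items span the maximum number of distinct (non-repeatedly visited) zones — can be stated in a few lines. The data layout, an order id mapped to the zones its items occupy, is our own assumption for illustration.

```python
def pick_order(orders):
    """Order selection rule found best in the study: choose the order whose
    items span the maximum number of distinct zones. `orders` maps an
    order id to the list of zone ids its items occupy."""
    return max(orders, key=lambda o: len(set(orders[o])))
```

Intuitively, orders touching many distinct zones keep more zones busy at once, which is consistent with the stable flow time and utilization the simulation reports for this rule.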