Defective-product data is often scarce because defective samples are difficult to obtain, whereas good-product data is abundant in manufacturing systems. One of the most frequently used methods to resolve the problems caused by this data imbalance is data augmentation, which increases the data of a minor class with few observations until it is comparable in size to a major class with many observations. BAGAN-GP uses an autoencoder in the early stage of learning to infer the distributions of the major and minor classes and to initialize the weights of the GAN. To resolve the weight-clipping problem, in which weights become concentrated at the boundary, the gradient penalty method is applied so that the weights are distributed appropriately within the range. Data augmentation techniques such as SMOTE, ADASYN, and Borderline-SMOTE are linearity-based: they connect observations with a line segment and generate data by selecting a random point on that segment. BAGAN-GP, on the other hand, does not exhibit linearity because it generates data based on the class distributions. To generate data with diverse characteristics from rare defective data, the MO1 and MO2 techniques are proposed. The data is augmented with the proposed techniques, and performance is compared with data augmented by existing techniques by classifying with MLP, SVM, and random forest. MO1 gives good results in most cases, presumably because the data is augmented more diversely by using both the linearity-based oversampling techniques and the distribution-based BAGAN-GP technique.
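The linear-interpolation idea behind SMOTE-style oversampling can be sketched as follows; the helper name and the toy minority samples are illustrative, not from the paper:

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=3, rng=None):
    """Generate synthetic minority samples in the spirit of SMOTE:
    pick a minority point, pick one of its k nearest minority
    neighbors, and sample a random point on the segment between them."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances among minority samples only
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    neighbors = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)                     # random minority point
        j = neighbors[i, rng.integers(k)]       # one of its k neighbors
        lam = rng.random()                      # random position on segment
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# toy minority class: four points at the corners of the unit square
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_like_oversample(X_min, n_new=10, rng=0)
```

Because every synthetic point is a convex combination of two existing minority points, the generated data stays on these line segments, which is exactly the linearity property the abstract contrasts with BAGAN-GP.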
The injection molding process heats thermoplastic resin into a fluid state, injects it under pressure into the cavity of a mold, and cools it in the mold to produce a product identical to the shape of the cavity. It enables mass production and complex shapes, and various factors such as resin temperature, mold temperature, injection speed, and pressure affect product quality. In data collected at manufacturing sites there is much data on good products but little on defective products, resulting in serious data imbalance. To resolve this imbalance efficiently, undersampling, oversampling, and composite sampling are usually applied. In this study, oversampling techniques that amplify minority-class data toward the size of the majority class, such as random oversampling (ROS), SMOTE (Synthetic Minority Oversampling Technique), and ADASYN (Adaptive Synthetic Sampling), as well as composite sampling that combines undersampling and oversampling, are applied. For composite sampling, SMOTE+ENN and SMOTE+Tomek are used. Artificial neural network techniques are used to predict product quality. In particular, MLP and RNN are applied, and various parameters of the MLP and RNN must be optimized. In this study, we propose an SA (simulated annealing) technique that optimizes the choice of sampling method, the minority-class ratio for the sampling method, and the batch size and number of hidden-layer units of the MLP and RNN. The existing sampling methods and the proposed SA method are compared using accuracy, precision, recall, and F1 score to demonstrate the superiority of the proposed method.
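The SA search over the mixed decision space described above (sampling method, minority ratio, batch size, hidden units) can be sketched as follows. The candidate values and the objective are stand-ins: in the study the objective would be a validation metric of an MLP/RNN trained on the resampled data, which is replaced here by a purely illustrative function so the sketch is self-contained:

```python
import math, random

# Candidate decisions, following the abstract.
SAMPLERS = ["ROS", "SMOTE", "ADASYN", "SMOTE+ENN", "SMOTE+Tomek"]
RATIOS   = [0.25, 0.5, 0.75, 1.0]
BATCHES  = [16, 32, 64, 128]
UNITS    = [8, 16, 32, 64]

def objective(s):
    # Stand-in for the real objective (e.g. validation F1 of a model
    # trained on the resampled data); purely illustrative.
    sampler, ratio, batch, units = s
    return -((ratio - 0.75) ** 2
             + 0.01 * (math.log2(batch) - 5) ** 2
             + 0.01 * (math.log2(units) - 4) ** 2
             - 0.05 * (sampler in ("SMOTE+ENN", "SMOTE+Tomek")))

def neighbor(s, rng):
    s = list(s)
    i = rng.randrange(4)                       # perturb one decision at random
    s[i] = rng.choice([SAMPLERS, RATIOS, BATCHES, UNITS][i])
    return tuple(s)

def simulated_annealing(t0=1.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    cur = (SAMPLERS[0], RATIOS[0], BATCHES[0], UNITS[0])
    best, t = cur, t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = objective(cand) - objective(cur)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            cur = cand                          # accept better or, sometimes, worse
        if objective(cur) > objective(best):
            best = cur
        t *= cooling                            # geometric cooling schedule
    return best

best = simulated_annealing()
```

The acceptance rule lets worse configurations through early (high temperature) to escape local optima, then becomes greedy as the temperature decays.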
Most predictions made with machine learning are neutral predictions, treating the symmetric situation in which under-prediction and over-prediction are penalized equally. In some situations, however, asymmetric prediction, that is, deliberate over-prediction or under-prediction, may be better than neutral prediction, and providing a range of predictions can lead decision makers to better judgments. A method called Asymmetric Twin Support Vector Regression (ATSVR), based on TSVR (Twin Support Vector Regression), which has a fast computation time, is proposed; it controls the asymmetry of the upper and lower widths of the ε-tube and the asymmetry of the penalty with two parameters. In addition, both the existing GSVQR and the proposed ATSVR were applied to prediction with over-prediction, under-prediction, and neutral prediction propensities. When the two parameters were used, both GSVQR and ATSVR could predict according to the desired propensity, and ATSVR was more than twice as fast in computation time. In terms of accuracy there was no significant difference between ATSVR and GSVQR, but graphical inspection showed that GSVQR reflected the prediction propensity better than ATSVR. The accuracy of under-prediction and over-prediction was lower than that of neutral prediction. Using both parameters (p_1, p_2), rather than only one of them, appears to increase the change in prediction tendency; depending on the situation, however, it may be better to use only one.
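One plausible way to write an asymmetric ε-insensitive loss governed by two parameters, sketched under the assumption that p_1 skews the upper/lower tube widths and p_2 skews the penalties (the paper's exact formulation may differ):

```latex
L_{p_1,p_2}(r) =
\begin{cases}
p_2\,\bigl(r - p_1\varepsilon\bigr), & r > p_1\varepsilon,\\[2pt]
0, & -(1-p_1)\varepsilon \le r \le p_1\varepsilon,\\[2pt]
(1-p_2)\,\bigl(-r - (1-p_1)\varepsilon\bigr), & r < -(1-p_1)\varepsilon,
\end{cases}
\qquad r = y - f(x).
```

With p_1 = p_2 = 1/2 this reduces to the usual symmetric ε-insensitive loss; pushing p_2 toward 1 penalizes under-prediction more heavily, inducing an over-prediction propensity.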
With the spread of smart manufacturing, one of the key topics of the 4th industrial revolution, manufacturing systems are moving beyond automation to smartization using artificial intelligence. In existing automatic machining in particular, machining defects and non-processing occur due to tool damage or severe wear, reducing productivity and increasing quality defect rates. It is therefore important to measure and predict tool life. In this paper, v-ASVR (v-Asymmetric Support Vector Regression), which considers the asymmetry of the ε-tube and the asymmetry of the penalties for data outside the ε-tube, is proposed and applied to the tool wear prediction problem. In the case of tool wear, if the predicted wear is smaller than the actual value (under-estimation), product failure may occur due to tool damage or wear; over-estimation is therefore necessary, and v-ASVR is well suited to this setting. It is shown that even when the asymmetry of the ε-tube and of the penalties is adjusted, the proportion of data belonging to the ε-tube can still be controlled with v. Experiments compare the accuracy of various kernel functions such as linear, polynomial, RBF (radial basis function), and sigmoid; the RBF kernel gives the best results in all cases.
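By analogy with the standard ν-SVR formulation, one way such a v-controlled asymmetric tube could be written is sketched below; λ and p are assumed asymmetry parameters for the tube widths and penalties, and this is an illustrative reading rather than the paper's exact program:

```latex
\min_{w,\,b,\,\varepsilon,\,\xi,\,\xi^*} \;
\frac{1}{2}\|w\|^2
+ C\!\left( \nu\varepsilon
+ \frac{1}{n}\sum_{i=1}^{n}\bigl(p\,\xi_i + (1-p)\,\xi_i^*\bigr) \right)
\quad \text{s.t.} \quad
\begin{aligned}
y_i - (w^\top x_i + b) &\le \lambda\varepsilon + \xi_i,\\
(w^\top x_i + b) - y_i &\le (1-\lambda)\varepsilon + \xi_i^*,\\
\xi_i,\ \xi_i^* \ge 0,&\quad \varepsilon \ge 0.
\end{aligned}
```

As in ν-SVR, making ε a decision variable weighted by ν lets the fraction of points outside the tube be bounded, which matches the abstract's claim that this ratio can be adjusted with v even under asymmetry.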
Fair allocation of the profits or costs that arise when multiple individuals or entities with different purposes participate jointly is essential for their continued involvement and for reducing their dissatisfaction. In this research, fair allocation of the profit of forming a grand coalition in a Three-Echelon Supply Chain (TESC) game composed of a manufacturer, a distributor, and a retailer is studied. In particular, the solutions of the proportional method of profit, the proportional method of marginal profit, and the Shapley value based on cooperative game theory are proved to satisfy the desirable characteristics of the core. The proportional methods of profit and of marginal profit are often used because of their ease of application: they distribute the total profit in proportion to the profits or marginal profits of each game participant. The Shapley value can be defined as the average marginal profit when game players are added one at a time. Even though calculating the average over all possible marginal profits is not simple, the Shapley value is often used as a useful method. Experiments show that the solution of the incremental method, which calculates the marginal profit of adding game players in the order manufacturer, distributor, retailer, does not exist in the core.
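The "average marginal profit over all join orders" definition of the Shapley value can be computed directly by enumerating permutations. The three-player characteristic function below is hypothetical (the profit figures are illustrative, not from the paper):

```python
from itertools import permutations

# Hypothetical profit function for a three-player TESC game
# (manufacturer M, distributor D, retailer R).
v = {frozenset(): 0,
     frozenset("M"): 10, frozenset("D"): 10, frozenset("R"): 10,
     frozenset("MD"): 25, frozenset("MR"): 25, frozenset("DR"): 25,
     frozenset("MDR"): 45}

def shapley(players, v):
    """Average each player's marginal profit over all join orders."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]   # marginal profit
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in players}

phi = shapley("MDR", v)
```

In this symmetric toy game each echelon receives 15, the allocation is efficient (sums to v(N) = 45), and every two-player coalition receives at least its stand-alone profit, so the Shapley solution lies in the core here.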
Cooperative game theory considers a set of players and a utility function that assigns a value to each subset of players, called a coalition. The purpose of a cost allocation method is to allocate the relevant cost among game players in a fair and rational way, and cost allocation methods based on cooperative game theory have therefore been applied in many areas. The desirable characteristics of a cost allocation method are Pareto optimality, rationality, and marginality. Pareto optimality means that the cost is paid entirely by the participating players. Rationality means that, by joining the grand coalition, players do not pay more than they would in any smaller coalition. Marginality means that players are charged at least enough to cover their marginal costs. If these characteristics are all met, the solution of the cost allocation method lies in the core. In this study, the proportional method is applied to the EOQ inventory game and the EPQ inventory game with shortage. The proportional method allocates costs in proportion to a chosen allocator, and it has been applied to a variety of problems because of its convenience and simple calculation. Depending on the allocator, however, the proportional method has the weakness that its solution may not lie in the core. Three allocators are considered: demand, marginal cost, and cost. We prove that the solutions of the proportional method with demand and with marginal cost as allocators lie in the core for the EOQ game and the EPQ game with shortage, and a counterexample shows that the solution of the proportional method with cost as the allocator does not always lie in the core.
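A proportional allocation and its core check can be illustrated on a toy EOQ game. The coalition cost below uses a standard EOQ pooling form, c(S) = sqrt(2*K*h*D_S) with pooled demand D_S; the parameter values and player demands are illustrative:

```python
from itertools import chain, combinations
from math import sqrt

K, h = 100.0, 2.0                              # setup and holding cost (toy values)
demand = {"A": 400.0, "B": 900.0, "C": 1600.0}
players = list(demand)

def cost(S):
    # Joint EOQ cost of a coalition that pools its demand.
    return sqrt(2 * K * h * sum(demand[p] for p in S)) if S else 0.0

def coalitions(players):
    return chain.from_iterable(combinations(players, r)
                               for r in range(1, len(players) + 1))

# Proportional method with demand as the allocator.
D_N = sum(demand.values())
alloc = {p: cost(players) * demand[p] / D_N for p in players}

# Core check: no coalition pays more than its stand-alone cost.
in_core = all(sum(alloc[p] for p in S) <= cost(S) + 1e-9
              for S in coalitions(players))
```

Because sqrt is concave, a coalition S is charged sqrt(2Kh) * D_S / sqrt(D_N), which never exceeds its stand-alone cost sqrt(2Kh * D_S), so `in_core` holds for this game, consistent with the result the abstract states for the demand allocator.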
A modular assembly system makes it possible to assemble a variety of products within a short lead time. In this system, necessary components are assembled with optional components tailored to customers' orders. The budget for inventory investment, composed of inventory and purchasing costs, is limited in practice, and the purchasing cost is often paid when an order arrives. Service cost is assumed to be proportional to the service level and is included in the budget constraint. We develop a heuristic procedure to find a good solution for a continuous review inventory system of the modular assembly system under a budget constraint. A regression analysis using a quadratic function of an exponential function is applied to the cumulative distribution function of the normal distribution. With this regression result, an efficient heuristic is proposed by approximating some complex functions with functions composed of exponential terms only. A simple problem is introduced to illustrate the proposed heuristic.
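The regression step can be sketched as follows. Fitting the standard normal CDF with a quadratic in u = exp(-z) on z >= 0 is one illustrative reading of "a quadratic function based on the exponential function"; the paper's exact regressor and fitting range may differ:

```python
import numpy as np
from math import erf, sqrt

# Grid on z >= 0 and the exact standard normal CDF values.
z = np.linspace(0.0, 4.0, 81)
phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))

# Least-squares quadratic fit in the transformed variable u = exp(-z).
u = np.exp(-z)
c2, c1, c0 = np.polyfit(u, phi, 2)

def phi_approx(zv):
    uv = np.exp(-zv)
    return c0 + c1 * uv + c2 * uv ** 2

max_err = float(np.max(np.abs(phi_approx(z) - phi)))
```

The payoff of such an approximation is that expressions involving the normal CDF reduce to sums of exponentials, which is what makes the closed-form manipulations in the heuristic tractable.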
If the service level increases, service costs such as labor and facility costs also increase. This service cost is included in the budget constraint in this paper and makes the problem difficult to solve. The purpose of this research is to develop an efficient approach for a continuous review inventory system with a budget constraint in which a semi-finished product and optional components must be assembled. Assuming that the reorder point of the semi-finished product is given, the order quantity for the semi-finished product and the order quantity and reorder point for the optional components are determined to minimize the total cost, which includes setup, inventory holding, and shortage costs. The performance of the proposed approach is checked with an example.
Inventory centralization for a number of stores may reduce inventory costs by establishing and maintaining a central ordering and distribution point. In this study, a centralized warehouse problem with multiple items and a capacity constraint is considered. The objective is to develop a methodology that decides the number and locations of centralized warehouses and determines the order quantity (Q) and reorder point (r) of each centralized warehouse to minimize holding, setup, penalty, and transportation costs. Each centralized warehouse uses a continuous review inventory policy, and its budget is limited. An SA (simulated annealing) approach for this problem is developed.
In this paper, a modified simulated annealing approach for solving single-machine mean tardiness scheduling problems is proposed. Simulation results indicate that the proposed method provides more stable solutions than those of previous studies, and it also provides better-quality solutions for large problems.
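A plain (unmodified) simulated annealing baseline for the single-machine mean tardiness problem can be sketched as follows; the job data, swap neighborhood, and cooling schedule are illustrative choices, not the paper's modified method:

```python
import math, random

# Toy instance: (processing time, due date) per job.
jobs = [(4, 9), (3, 5), (7, 11), (2, 4), (5, 20), (6, 13)]

def mean_tardiness(seq):
    t, tard = 0, 0
    for j in seq:
        p, d = jobs[j]
        t += p                                  # completion time of job j
        tard += max(0, t - d)                   # tardiness of job j
    return tard / len(seq)

def sa_schedule(t0=10.0, cooling=0.98, steps=2000, seed=1):
    rng = random.Random(seed)
    cur = list(range(len(jobs)))
    best, temp = cur[:], t0
    for _ in range(steps):
        i, j = rng.sample(range(len(jobs)), 2)  # swap two positions
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = mean_tardiness(cand) - mean_tardiness(cur)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cur = cand
        if mean_tardiness(cur) < mean_tardiness(best):
            best = cur[:]
        temp *= cooling
    return best

best = sa_schedule()
```

A "modified" SA of the kind the abstract describes would typically alter the cooling schedule, neighborhood, or restart rule relative to this baseline.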