
Journal of Society of Korea Industrial and Systems Engineering (KCI-indexed)


Issue

Vol. 38 No. 3 (September 2015), 20 articles

1.
This research develops a process for applying k-means clustering to efficient vehicle routing under time-varying vehicle speeds. Time-varying vehicle speeds are common in metropolitan areas, and the travel time between two specific delivery points can vary widely: less time is needed if a delivery vehicle moves before or after rush hours. Such speed variations make the search for efficient vehicle routes extremely difficult, even for near-optimum routes, because the required travel time between delivery points keeps changing. Delivery area division is designed to simplify these complicated VRPs caused by time-varying vehicle speeds. Divided areas can be grouped into a few adjacent divisions under the assumption that the vehicle speed does not change within each division, so the speed between two delivery points in the same group can be taken as constant. Optimum routes can then be searched based on the distance between two points, as in a regular traveling salesman problem. This simplifies the complicated search process, since a few local optimum routes can be found and then connected into a complete route. A method to divide the area using k-means clustering is suggested, and detailed examples are given with explanations in this paper. The results obtained using the suggested process are more reasonable than those of other methods, and the suggested area division process can generate better area divisions, promising improved vehicle route generation.
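As a rough illustration of the area-division step described above, the sketch below clusters hypothetical delivery-point coordinates with k-means; the coordinates, the number of divisions k, and the use of scikit-learn are assumptions for illustration, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical delivery-point coordinates (x, y) in km.
rng = np.random.default_rng(42)
points = rng.uniform(0, 50, size=(200, 2))

# Divide the delivery area into k sub-areas; within each sub-area the vehicle
# speed is assumed constant, so routing reduces to a distance-based TSP.
k = 6
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)

for c in range(k):
    members = points[model.labels_ == c]
    print(f"division {c}: {len(members)} delivery points, "
          f"centroid {model.cluster_centers_[c].round(1)}")
```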
2.
One of the most common indicators used to measure the performance of any inventory policy is the mean physical stock. When estimating the mean physical stock in periodic review inventory systems, practitioners and researchers often rely on approximate approaches: the mean physical stock is generally calculated by a simple approximation. These simple methods are frequently used to analyze various single-stockpoint and multi-echelon inventory systems. However, such a simple approximation can be very inaccurate, particularly for low service levels. Although exact methods to calculate the mean physical stock have been derived, they are available for specific cases only and are computationally inefficient, and therefore less useful in practice. In the literature, approximate approaches such as the simple, the linear, and the Simpson approximations were derived for periodic review inventory systems that allow backorders. This paper modifies these approximate approaches for the lost sales case and evaluates the modified approximations. Through computational experiments, average (and maximum) percentage deviations of the mean physical stock between the exact method and the modified approximations are compared in the periodic review inventory system with lost sales. The same comparison between the modified and the original approximations is also conducted, in order to examine the performance of the modified approximations. The results show that all modified approximations perform well for high service levels, but that performance may deteriorate quickly with decreasing service level; the modified Simpson approximation is clearly better. In addition, the comparison between the modified and the original approximations in the periodic review inventory system with lost sales shows that the modified approximations outperform the original ones.
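The abstract does not give the approximation formulas, so the following sketch only illustrates the general idea: it simulates a periodic review order-up-to system with lost sales and compares the simulated mean physical stock against one common textbook-style simple approximation that ignores the lost-sales floor. The order-up-to level, the Poisson demand, and the zero lead time are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 50            # order-up-to level (hypothetical)
mu = 45.0         # mean demand per review period; close to S gives a low service level
n = 100_000      # simulated review periods

# Simulated mean physical stock: with zero lead time and lost sales, each
# period starts at S; on-hand depletes (roughly linearly) to max(S - D, 0).
demand = rng.poisson(mu, n)
end_stock = np.maximum(S - demand, 0)
simulated = (S + end_stock).mean() / 2

# Simple approximation ignoring the lost-sales floor: (S + E[S - D]) / 2.
simple = (S + (S - mu)) / 2

print(f"simulated mean stock ~ {simulated:.2f}, simple approximation = {simple:.2f}")
```

At this low service level the two values visibly diverge, which is the kind of deterioration the paper quantifies.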
3.
It is important to understand the psychological and physiological responses of occupants seated in a chair in order to shape a comfortable indoor office environment, so optimal seated conditions need to be identified. The purpose of this study was to explore the optimal conditions for seat air-conditioning control based on psychological or subjective responses (perceived temperature and comfort sensation) and physiological responses (heart rate variability, HRV). To do this, experimental conditions were defined by the difference between the indoor temperature and the seat air-conditioning temperature. In Experiment 1, seven conditions were designed: one control condition in which the seat air-conditioning system was not used, and six experimental conditions with temperature differences from -1℃ to -6℃. In Experiment 2, four conditions were designed: one control condition and three experimental conditions (-3℃ to -5℃). Participants' psychological or subjective responses were measured by CSV (comfort sensation vote) and PTS (perceived temperature sensitivity), and heart rate variability was measured as a physiological response. In Experiment 1, the optimal conditions for seat air-conditioning control based on participants' psychological or subjective comfort were the -3℃ to -5℃ conditions. In Experiment 2, the optimal condition based on participants' physiological comfort was the -4℃ condition. These results suggest that seat air conditioning can affect the comfort sensation of occupants within an appropriate range, rather than unconditionally.
4.
As film products are increasingly used in a wide range of areas, from traditional flexible packaging to high-tech electronic products, a higher level of quality is demanded. Most film products are made in the form of rolled finished goods, so various quality issues related to their shape characteristics must be addressed. The thickness of a film product is one of the most common and important critical-to-quality attributes (CTQs). In particular, the degree of thickness uniformity is more important than other thickness parameters, because it is a potential cause of many secondary thickness-related quality problems, such as wrinkles or faulty windings. To control the degree of thickness uniformity, the fixed bending region is one of the most important CTQs to manage. Fixed bending regions are special points in the transverse direction of a rolled product with consistent minute variations of the thickness gap. This paper describes the measurement and analysis of thickness uniformity data, performed on a real manufacturing line for biaxially oriented polypropylene (BOPP) film. In previous research, quality function deployment (QFD) or fault tree analysis was used to identify the process attributes most critical to controlling the CTQ of thickness uniformity; this paper instead uses traditional control charts for that purpose. In addition, the selection of one of the major critical-to-process attributes (CTPs) expected to affect the CTQ of thickness uniformity is described. The selected CTPs are the controlled temperatures along the transverse direction. A dramatic improvement in thickness uniformity was observed when the selected CTPs were controlled.
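As a small illustration of the control-chart step, the following sketch builds an X-bar chart over hypothetical thickness subgroups; the data, the subgroup size, and the Shewhart constant are generic SPC practice, not values from the paper.

```python
import numpy as np

# Hypothetical thickness readings (um): 25 subgroups of 5 scans at one
# transverse position of the BOPP film.
rng = np.random.default_rng(1)
subgroups = rng.normal(20.0, 0.15, size=(25, 5))

xbar = subgroups.mean(axis=1)                               # subgroup means
rbar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()  # mean range
A2 = 0.577                                                  # Shewhart constant, n = 5
center = xbar.mean()
ucl, lcl = center + A2 * rbar, center - A2 * rbar

out = np.where((xbar > ucl) | (xbar < lcl))[0]
print(f"CL={center:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}  out-of-control subgroups: {out}")
```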
5.
In this paper, the scale efficiencies and relative efficiencies of R&D projects in the Industrial Technology Program, sponsored by the Ministry of Trade, Industry and Energy, Korea, are calculated and compared. Various DEA (Data Envelopment Analysis) models are adopted as the major technique: two-stage input-oriented models are utilized for calculating the efficiencies. The calculated efficiencies are then grouped according to subprogram (Industrial Material, IT Fusion, Nano Fusion, Energy Resources, and Resources Technology) and recipient type (public enterprise, large enterprise, medium enterprise, small enterprise, laboratory, university, etc.), and the subprograms and recipient types are compared in terms of scale efficiencies (CCR models) and relative efficiencies (BCC models). In addition, the correlation between the 1st-stage and 2nd-stage relative efficiencies is calculated, from which the causal relationship between them can be inferred. Statistical analysis shows that the amount of input, in general, should increase in order to be scale efficient (CCR models) regardless of subprogram and recipient type; that the 1st- and 2nd-stage relative efficiencies differ across programs and recipient types (BCC models); and that there is no significant correlation between the 1st-stage and 2nd-stage relative efficiencies. However, the results should be used only as a reference, because every subprogram has a different goal and every recipient type faces a different situation. In addition, the causal link between the 1st-stage and 2nd-stage relative efficiencies is not considered, which is a limitation of this paper.
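The abstract names the standard CCR and BCC DEA models; the sketch below solves the input-oriented CCR envelopment LP for hypothetical project data with scipy. Adding the convexity constraint (sum of lambdas equal to 1) would give the BCC variant. All data values are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 5 projects (DMUs), 2 inputs, 1 output.
X = np.array([[2., 3.], [4., 1.], [3., 3.], [5., 2.], [4., 4.]])
Y = np.array([[1.], [1.], [1.5], [2.], [1.5]])

def ccr_efficiency(k):
    """Input-oriented CCR: min theta s.t. sum(lam*x) <= theta*x_k, sum(lam*y) >= y_k."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                 # variables: theta, lam_1..lam_n
    A_in = np.c_[-X[k].reshape(m, 1), X.T]      # sum_j lam_j x_ij - theta x_ik <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]       # -sum_j lam_j y_rj <= -y_rk
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for k in range(len(X)):
    print(f"DMU {k}: CCR efficiency = {ccr_efficiency(k):.3f}")
```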
6.
In this study, a method for setting the optimal limit on SME (small and medium enterprise) loans of financial institutions was proposed using the traditional newsvendor model. This is the first domestic case study that applies the newsvendor model, which is mainly used to calculate the optimal order quantity under uncertain demand, to the calculation of a loan limit (debt ceiling). The method makes it possible to calculate the loan limit that maximizes the revenue of a financial institution using probability functions, by treating the loan volume of the financial institution as the order volume of merchandise in the newsvendor model. Through the analysis of empirical data, it proposes the availability of additional loans to a borrower, the reduction of the debt ceiling, and a management method for recovering loans from borrowers who cannot generate profit. In addition, during validation with empirical data, borrowing SMEs for which the model had generated a loan recovery signal actually went bankrupt at later times; this demonstrates that the profit-based loan management model presented here contributes not only to calculating profit-based loan limits but also, to some extent, to predicting the bankruptcy of borrowing SMEs. Accordingly, the method suggests a methodology for generating loan recovery signals to reduce the losses from bankruptcy.
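The classic newsvendor result sets the optimal quantity at the critical fractile F^-1(cu / (cu + co)). A minimal sketch applying that fractile to a loan limit follows; the cost values, the normal distribution of the borrower's repayable amount, and the underage/overage interpretation are illustrative assumptions, not the paper's calibration.

```python
from scipy.stats import norm

# cu: marginal profit lost per unit of loan demand not met (underage cost)
# co: marginal loss per unit lent beyond what the borrower can repay (overage cost)
cu, co = 0.04, 0.10
critical_ratio = cu / (cu + co)

# Assume the borrower's repayable amount is normally distributed (illustrative).
mu, sigma = 500.0, 80.0          # in million KRW, hypothetical
loan_limit = norm.ppf(critical_ratio, loc=mu, scale=sigma)
print(f"critical ratio = {critical_ratio:.3f}, optimal loan limit ~ {loan_limit:.1f}")
```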
7.
Nanotechnology has been growing constantly and is becoming the leading technology in scientific research and development. Although nanotechnology has important applications in a broad variety of fields, not bounded to any particular industrial area, studies of nanotechnology related to its commercialization have been conducted in only a few ways. In this context, this study investigates public and expert perceptions of the negative potential of nanotechnology. Through a series of surveys with the public (N = 541) and experts (N = 62), we analyzed public willingness to pay for nano-applied products. Survey results showed that both the public and experts preferred nano-applied products in the order of electronics, cosmetics, and food and medicine, and that experts expressed higher payment intention for electronics than the public did. In addition, the results showed that the purchasing intention of both groups was affected by the attributes of nano-applied products in the order of risk fatality, risk chance, certification, and labeling, but that experts put more importance on risk fatality relative to risk chance than the public did. Through a case analysis of the effects of labeling and certification, we revealed that either labeling or certification can induce both the public and experts to buy nano-applied products with high risk chance and low risk fatality. However, for nano-applied products with high risk fatality and low risk chance, both labeling and certification are simultaneously required to make customers have positive purchasing intention. The results of this study could be used by nanotechnology-based companies to obtain consumer behavior information about nano-based products and to establish their marketing strategies.
8.
Several fields of science demand large-scale workflow support, requiring thousands of CPU cores or more. To support such large-scale scientific workflows, high-capacity parallel systems such as supercomputers are widely used. To increase the utilization of these systems, most schedulers use a backfilling policy: small jobs are moved ahead to fill holes in the schedule as long as they do not delay large jobs. Since backfilling requires an estimate of each job's runtime, most parallel systems use the user's estimated runtime; however, user estimates are known to be extremely inaccurate because users overestimate their jobs' runtimes. Therefore, in this paper, we propose a novel system for runtime prediction based on workload-aware clustering, with the goal of improving prediction performance. The proposed method for runtime prediction of parallel applications consists of three main phases. First, feature selection based on factor analysis is performed to identify important input features. Second, a clustering analysis of history data is performed based on a self-organizing map, followed by hierarchical clustering to find the cluster boundaries from the weight vectors. Finally, prediction models are constructed using support vector regression on the clustered workload data. Multiple prediction models, one per clustered data pattern, can reduce the error rate compared with a single model for the whole data set. In the experiments, we use workload logs from parallel systems (i.e., iPSC, LANL-CM5, SDSC-Par95, SDSC-Par96, and CTC-SP2) to evaluate the effectiveness of our approach. Compared with other techniques, experimental results show that the proposed method improves accuracy by up to 69.08%.
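A compact sketch of the cluster-then-regress idea follows, substituting scikit-learn's agglomerative clustering for the paper's self-organizing-map stage (plainly a swapped-in technique, since scikit-learn has no SOM) and fitting one SVR per cluster; the synthetic workload features and targets are assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)
# Hypothetical workload log: [requested_cpus, user_estimate, queue_id] -> runtime.
X = rng.uniform(0, 1, size=(300, 3))
y = 100 * X[:, 0] + 50 * X[:, 1] ** 2 + rng.normal(0, 5, 300)

Xs = StandardScaler().fit_transform(X)
labels = AgglomerativeClustering(n_clusters=3).fit_predict(Xs)

# One SVR model per workload cluster, following the paper's multi-model idea.
models = {c: SVR(kernel="rbf", C=10.0).fit(Xs[labels == c], y[labels == c])
          for c in np.unique(labels)}
for c, m in models.items():
    mask = labels == c
    err = np.abs(m.predict(Xs[mask]) - y[mask]).mean()
    print(f"cluster {c}: {mask.sum()} jobs, mean abs error {err:.2f}")
```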
9.
An effective way to produce munitions effectiveness data is to calculate weapon effectiveness indices as in the US military's Joint Munitions Effectiveness Manuals (JMEM) and to take advantage of the damage evaluation model (GFSM) and the weapon effectiveness evaluation model (Matrix Evaluator). However, in the case of the ROK military, research on Range Safety that can be applied to live-fire exercises is very insufficient. The Range Safety program is an element of the US Army Safety Program responsible for developing policies and guidance to ensure the safe operation of live-fire ranges. The Weapon Danger Zone (WDZ) methodology is based on a combination of weapon modeling/simulation data and actual impact data, and each WDZ incorporates a probability distribution function that provides the information necessary to perform a quantitative risk assessment of the relative risk of an identified profile. Establishing K-Range Safety data means developing manuals (pamphlets) that will serve as a standard to ensure effective and safe fire training under ROK military education, training, and environmental conditions. For example, WDZs are generated with the WDZ tool as part of the RMTK (Range Managers Tool Kit) package; the WDZ tool is a Geographic Information System-based application available to operational planners and range safety managers of the Army and Marine Corps in both desktop and web-based versions. A K-Range Safety program based on US data must reflect Korean terrain, operating environments, training doctrine, and so on, so a verification process for the modified variable data is required. K-Range Safety is not produced by a single program; it is a package of safety activities and measures spanning the weapon danger zone tool, the SRP (Sustainable Range Program), manuals, doctrine, terrain, climate, military defense M&S, and weapon system development/operational test evaluation and analysis, aimed at continuously improving range safety zones. Distribution of the K-Range Safety pamphlet would be available to Army users in electronic media only and is intended for the standing army and army reserve; publication and distribution to authorized Marine Corps users are indicated in the table of allowances for publications. Therefore, this study proposes an efficient way to produce a K-Range Safety Manual for calculating the danger zones applicable to the ROK military's live-fire training, by introducing the US Army weapon danger zone program and Range Safety Manual.
10.
Today's business environment for manufacturers is very complex and rapidly changing: companies face a variety of changes, such as diversifying customer requirements, shortening product life cycles, and the switch to small-quantity batch production. In this situation, companies are introducing the concept of JIT (just-in-time) to solve the problem of on-time production and on-time delivery for survival. Although many companies have introduced ERP (enterprise resource planning) and MRP (material requirements planning) systems, the performance of these systems seems to fall short of expectations. This paper presents a case study on introducing an APS (advanced planning and scheduling) system based on dispatching rules to a machining company and on finding a method to establish an efficient production schedule. The case company has trouble creating an effective production plan and schedule, even though it is equipped with an MRP-based ERP system. The APS system is applied to CNC (computer numerical control) machines, the key machines of the case company. The overall progress of this research is as follows. First, we collect and analyze the master data on the individual products and processes of the case company in order to build a production scheduling model. Second, we perform a pre-allocation simulation based on dispatching rules in order to calculate the priority of each order. Third, we perform a set of production simulations applying the priority values in order to evaluate the production lead time and tardiness of the pre-defined dispatching rules. Finally, we select the optimal dispatching rule for the work situation of the case company. As a result, the improved production schedule leads to increased production and reduced production lead time.
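As a minimal sketch of the pre-allocation step, the following compares two standard dispatching rules (SPT and EDD) on a few hypothetical orders and reports lead time and total tardiness; the order data and the choice of these two rules are illustrative, not the case company's.

```python
from dataclasses import dataclass

@dataclass
class Order:
    name: str
    processing_time: float   # hours on the CNC machine
    due_date: float          # hours from now

orders = [Order("A", 4, 10), Order("B", 2, 6), Order("C", 6, 18), Order("D", 3, 7)]

def simulate(rule):
    """Sequence orders by a dispatching rule; return sequence, lead time, tardiness."""
    seq = sorted(orders, key=rule)
    t, tardiness = 0.0, 0.0
    for o in seq:
        t += o.processing_time
        tardiness += max(0.0, t - o.due_date)
    return [o.name for o in seq], t, tardiness

for name, rule in {"SPT": lambda o: o.processing_time,
                   "EDD": lambda o: o.due_date}.items():
    seq, lead, tard = simulate(rule)
    print(f"{name}: sequence {seq}, lead time {lead}, total tardiness {tard}")
```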
11.
Recently, management based on statistical data has become a big issue, and the importance of statistics has been emphasized for management innovation in the defense area. However, military management based on statistics is hard to achieve because of the shortage of statistics in the military. There are many military information systems generating a great deal of data in real time, but since the infrastructure for gathering data from these systems and producing statistics from the gathered data is not in place, the use of statistics in the military is poor. In this study, an analytical defense statistics system is designed to effectively improve defense management. The new system, with subsystems for data management, analysis, and service, can gather operational data from other interlocked defense operational systems and produce defense statistics, besides providing statistics services. Additionally, a special function for user-oriented statistics production is added to create new statistics by handling existing statistics and data. A data warehouse is considered for managing the data, and an online analytical processing (OLAP) tool is used to enhance the efficiency of data handling. The main functions of R, a well-known analysis program, are considered for the statistical analysis, and quality management techniques are applied to find faults in both regular and irregular types of data. The new statistics system will bring together technologies such as data warehousing, business intelligence, data standardization, and statistical analysis, and will help improve the efficiency of military management.
12.
The IRR (internal rate of return) is often used by investors for the evaluation of engineering projects. Unfortunately, it has serious flaws: (1) multiple real-valued IRRs may arise; (2) complex-valued IRRs may arise; (3) the IRR is, in special cases, incompatible with the net present value (NPV) in accept/reject decisions. The efforts of management scientists and economists to provide a reliable project rate of return have generated over the decades an immense number of contributions aiming to resolve these shortcomings. In particular, multiple IRRs are a fatal flaw when deciding whether to accept a project. To solve this, some researchers proposed external rates of return (ERRs) such as the ARR (average rate of return) or the MIRR (modified internal rate of return). The ARR or MIRR will always yield the same decision for an engineering project as the NPV criterion. ERRs modify the procedure for computing the rate of return by making explicit and consistent assumptions about the interest rate at which intermediate receipts from projects may be reinvested, either in other projects or in the outside market. However, when traditional ERRs are used, the volume of capital investment remains unclear. Alternatively, the productive rate of return (PRR) can settle these problems. Generally, a rate of return is the profit on an investment over a period of time, expressed as a proportion of the original investment, where the time period is typically the life of the project. The PRR is based on the full life of the engineering project but is annualized to one year, and it uses the effective investment instead of the original investment. This method requires that the cash flow of an engineering project be separated into 'investment' and 'loss' to calculate the PRR value. In this paper, we propose a tabulated form for easy calculation of the PRR by modifying the profit and loss statement and the cash flow statement.
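The multiple-IRR flaw noted above is easy to reproduce: the sketch below finds the IRRs of a classic sign-alternating cash flow by solving the NPV polynomial, yielding two distinct rates (10% and 20%) and hence an ambiguous accept/reject decision. The cash flow is a textbook example, not one from the paper.

```python
import numpy as np

# Classic two-IRR cash flow: invest, receive, then pay out.
cash_flow = [-1000, 2300, -1320]

# IRRs are roots of NPV(r) = sum_t CF_t / (1 + r)^t = 0. Substituting
# x = 1/(1 + r) gives the polynomial CF_2 x^2 + CF_1 x + CF_0 = 0.
roots = np.roots(cash_flow[::-1])        # numpy wants highest power first
irrs = sorted(1 / np.real(r) - 1 for r in roots if np.isreal(r) and np.real(r) > 0)
print([f"{i:.1%}" for i in irrs])        # -> ['10.0%', '20.0%']: ambiguous decision
```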
13.
Recently, as manufacturers seek competitiveness in a dynamically changing environment, they are making great efforts to keep their production systems efficient, which may be achieved by reducing unplanned operation stops. Operation stops and maintenance costs are known to decrease significantly when a proper maintenance strategy is adopted. Therefore, manufacturers have become more interested in exact maintenance scheduling to keep operations smooth and prevent unexpected stops. In this paper, we propose an integrated maintenance approach for an injection molding manufacturing line. It consists of a predictive and a preventive maintenance approach: the predictive maintenance uses statistical process control techniques with real-time data, while the preventive maintenance is based on the checking period of machine components or equipment. For the predictive maintenance approach, we first identified the components or equipment that require maintenance, and then the machine parameters related to them. Second, we performed regression analysis to select the machine parameters that affect the quality of the manufactured products and are statistically significant, which allows us to exclude insignificant parameters from monitoring and focus on the significant ones. Third, we developed statistical prediction models for the selected machine parameters; current models include regression, exponential smoothing, and so on. We used these models to detect abnormal patterns and to schedule maintenance. Finally, for the components or equipment not covered by the predictive approach, we adopted the preventive maintenance approach. To show feasibility, we developed an integrated maintenance support system in a LabVIEW Watchdog Agent and SQL Server environment and validated the proposed methodology with experimental data.
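A condensed sketch of the second and third steps follows: an OLS regression screens hypothetical machine parameters for significance to product quality, and 3-sigma limits are then placed on a selected parameter stream. The parameter set, the data, and the 0.05 threshold are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
# Hypothetical injection-molding log: three machine parameters -> quality measure.
n = 200
Xp = rng.normal(size=(n, 3))
quality = 2.0 * Xp[:, 0] + 0.8 * Xp[:, 2] + rng.normal(0, 0.5, n)

# Step 2: keep only machine parameters significant for product quality.
ols = sm.OLS(quality, sm.add_constant(Xp)).fit()
significant = [i for i, p in enumerate(ols.pvalues[1:]) if p < 0.05]
print("significant parameter indices:", significant)

# Step 3 (sketch): 3-sigma monitoring limits on one selected parameter stream.
x = Xp[:, significant[0]]
ucl, lcl = x.mean() + 3 * x.std(ddof=1), x.mean() - 3 * x.std(ddof=1)
print(f"monitor parameter {significant[0]}: limits [{lcl:.2f}, {ucl:.2f}]")
```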
14.
A missile defense system is composed of radars that detect incoming missiles aimed at defense assets, command and control units that make weapon target assignment decisions, and artillery batteries that fire defensive weapons at the incoming missiles. Although the technology behind the development of radars and weapons is very important, effective assignment of the weapons against missile threats is even more crucial. When incoming missiles targeting valuable assets in the defense area are detected, the asset-based weapon target assignment model addresses the assignment of weapons to these missiles so as to maximize the total value of the surviving assets threatened by them. In this paper, we present a model for an asset-based weapon target assignment problem with a shoot-look-shoot engagement policy and a fixed set-up time between anti-missile launches from each defense unit. We then show a detailed linear approximation process for the nonlinear portions of the model and propose the final linearly approximated model. The proposed model is applied to several ballistic missile defense scenarios, each of which specifies the number of incoming missiles, the speed and position of each missile, the number of defense artillery batteries, the number of anti-missiles in each battery, the single-shot kill probability of each weapon against each target, the asset values, and the air defense coverage. After running the lpSolveAPI package of the R language with the given data for each scenario on a personal computer, we summarize the weapon target assignment results, specifying the launch order times for each artillery battery, and report the computer processing time needed to obtain the result for each scenario.
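The paper linearizes the model and solves it with lpSolveAPI in R; as a language-independent illustration of the underlying objective, the sketch below brute-forces a tiny asset-based assignment, maximizing the expected surviving asset value. The scenario data (two batteries, three missiles, kill probabilities, asset values) are hypothetical.

```python
import itertools

# Hypothetical scenario: 2 batteries with one interceptor each, 3 incoming missiles.
value = {"m1": 50, "m2": 80, "m3": 30}   # value of the asset each missile threatens
pk = {("b1", "m1"): 0.7, ("b1", "m2"): 0.6, ("b1", "m3"): 0.5,
      ("b2", "m1"): 0.5, ("b2", "m2"): 0.8, ("b2", "m3"): 0.6}

def expected_surviving_value(assignment):
    """Asset survives iff its missile is killed by at least one assigned interceptor."""
    total = 0.0
    for m, v in value.items():
        p_leak = 1.0
        for b, target in assignment.items():
            if target == m:
                p_leak *= 1 - pk[(b, m)]
        total += v * (1 - p_leak)
    return total

best = max((dict(zip(["b1", "b2"], t)) for t in itertools.product(value, repeat=2)),
           key=expected_surviving_value)
print(best, f"-> expected surviving value {expected_surviving_value(best):.1f}")
```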
15.
This paper analyzes an integrated production and inventory model in a single-vendor multi-buyer supply chain, where the vendor is the manufacturer and the buyers are the retailers. The product that the manufacturer produces is supplied to the retailers at constant periodic time intervals. The production rate of the manufacturer and the demand of the retailers are constant over time. The cycle time of the vendor is defined as the elapsed time from the start of one production run to the start of the next, while the cycle time of a buyer is the elapsed time between adjacent supply times from the vendor to that buyer. The cycle times of the vendor and the buyers that minimize the total cost of the supply chain are analyzed. The cost factors are the production setup cost and inventory holding cost of the manufacturer, and the ordering cost and inventory holding cost of the retailers. The vendor's cycle time is investigated through the cycle time that satisfies the economic production quantity given the manufacturer's production setup cost and inventory holding cost. An integrated production and inventory model is formulated, and an algorithm is developed. A numerical example is presented to explain the algorithm, and its solution is compared with that of a genetic algorithm. The numerical example shows that the vendor and the buyers can save cost through integrated decision making.
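As a small illustration of the EPQ starting point mentioned above, the sketch below computes the classic economic production quantity and the implied vendor cycle time; the demand, production rate, and cost figures are hypothetical, and the paper's integrated multi-buyer algorithm is not reproduced here.

```python
import math

# Hypothetical single-vendor parameters (annual figures).
D = 12_000      # total demand rate of all buyers (units/yr)
P = 36_000      # vendor production rate (units/yr), P > D
S = 400.0       # production setup cost per run
h = 2.5         # vendor holding cost per unit per year

# Classic EPQ lot size and the corresponding vendor cycle time T = Q / D,
# the kind of quantity the paper uses as its algorithm's starting point.
Q = math.sqrt(2 * D * S / (h * (1 - D / P)))
T = Q / D
print(f"EPQ = {Q:.0f} units, vendor cycle time = {T * 365:.1f} days")
```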
16.
Universal design is defined as the process of designing products and environments that are usable by all people, to the greatest extent possible, without the need for adaptation or specialized design. The benefits of universal design have been promoted primarily through illustrative 'success stories' of public, residential, and occupational environments and products. While case examples may be informative, they are unfortunately limited in their generality to other designs or tasks. Therefore, design methods and criteria are needed that can be applied systematically in a range of situations to encourage universal design. In addition, the seven principles of universal design are intended to guide the design process: they provide a framework that allows a systematic evaluation of new or existing designs and assists in educating both designers and consumers about the characteristics of more usable products and environments. However, exactly how these principles are incorporated into the design process has been left up to the designer. Since the introduction of universal design, designers have become familiar with its principles and have developed many products based on them. However, the principles of universal design are qualitative, which means designers cannot use them to evaluate their designs quantitatively. Some researchers have worked to develop more systematic ways to evaluate products and environments by providing design guidelines for each of the principles, but no recommendations have yet been made on how to integrate performance measures of universal design into the product design process before a product is mass-produced. Furthermore, there are sets of requirements for each user group with different ages and abilities. Consequently, there is an urgent need for design methods, based on a better understanding of age- and ability-related factors, that lead to universally designed products or environments. The authors have previously proposed the PSD (preference set-based design) method, which can generate a ranged set of feasible solutions (i.e., a robust and flexible solution set) instead of a single point solution that satisfies changing sets of design targets. The objective of this paper is to develop a general method for systematically supporting the universal design process: the paper proposes the applicability of the PSD method to universal design, and the proposed method is successfully illustrated with a universal design problem.
17.
We analyzed the effect of three different types of inventory systems on total cost, using simulation, for a system in which multiple depots and many retailers are dispersed over a limited area. The three inventory systems are a single-echelon system with inventory exchange, a two-echelon system, and a variant two-echelon system. The variant two-echelon system is a two-echelon system in which inventory transshipments are allowed at both inventory echelons: inventories kept at each retailer are shared among all retailers when a certain retailer has a stock-out, and when all retailers are stocked out, inventories kept at each depot are shared among the retailers whose assigned depots are stocked out. All three systems were simulated, under a service level constraint, over a wide range of parameter settings. The simulation results show that clear cost savings appear for the single-echelon system when the shortage cost portion becomes large, and for the two-echelon system when the transportation cost portion becomes large, irrespective of the number of depots. The variant two-echelon system seems to be superior to the other two systems when the transportation cost portion is very small, but this superiority is not statistically significant, so we may conclude that the variant two-echelon system may be useless given the higher administrative effort caused by frequent inventory exchanges. We also note that the traditional two-echelon system becomes statistically inferior to the other two systems when the service level is high or when the demand variance is very large. Finally, the inventory integration effect, whereby cost is saved as the number of depots decreases, diminishes as transportation or stock-out cost increases, irrespective of the inventory system.
18.
The purpose of this paper is to investigate the determinants of new product diffusion; we seek to document and explain systematic features of product diffusion. In this essay, we examine the well-documented empirical regularity that the speed of diffusion accelerated during the twentieth century. The empirical results show that the main source of acceleration is faster declines in prices: faster price declines make a product affordable to more consumers within a given period of time. Based on theories of intertemporal price discrimination and learning-by-doing, the association between the speed of adoption and the speed of price decline is explained. Faster price declines are attributed to several product characteristics as well as to changes in income distribution. Above all, the introduction of consumer electronic products in more recent years can be regarded as the most important factor in accelerating price declines. Consumer electronic products differ technologically from non-electronic goods in that semiconductors are important components; as the price of semiconductors has dropped rapidly, the falling production costs can be rapidly incorporated into the prices of consumer electronic goods. Furthermore, most recently introduced consumer electronic products have network externalities, and many products with network externalities require complementary products; a complementary product becomes more readily or cheaply available as more people own the main product. One major difference between previous studies and this one is that the former focus only on factors that operate directly on the speed of adoption, while this study incorporates factors that work through price changes as well as factors that work directly on the speed of adoption.
19.
Aggregate production planning determines the levels of production, human resources, and inventory that maximize a company's profits and fulfill customers' demands, based on demand forecasts. Since the performance of aggregate production planning depends heavily on the accuracy of the forecast demands, choosing an accurate forecasting method is a prerequisite for good aggregate production planning. Typical forecasting error metrics such as MSE (mean squared error), MAD (mean absolute deviation), MAPE (mean absolute percentage error), and CFE (cumulated forecast error) are generally used to choose a forecasting method. However, these metrics are designed only to measure the difference between real and forecast demands; they cannot account for the consequences of forecasting error, such as increased cost or decreased profit. Consequently, the traditional metrics fail to give enough guidance in selecting a good forecasting method for aggregate production planning. To overcome this limitation, this study suggests a new metric, WACFE (weighted absolute and cumulative forecast error), for evaluating forecasting methods. The WACFE is designed to consider not only the forecasting errors but also the costs those errors might cause in aggregate production planning: it is the product sum of the cumulative forecast error and weight factors for backorder and inventory costs. We demonstrate the effectiveness of the proposed metric through intensive experiments with demand data sets from the M3-competition. We show that the WACFE correlates more highly with total cost than the other metrics and, consequently, performs better in selecting forecasting methods for aggregate production planning.
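The abstract defines the WACFE only in words, as a product sum of cumulative forecast error and backorder/inventory cost weights; the sketch below encodes one plausible reading of that definition, weighting the running cumulative error by a backorder weight when demand is under-forecast and by an inventory weight otherwise. The weights and demand series are made up.

```python
def wacfe(actual, forecast, w_backorder=2.0, w_inventory=1.0):
    """WACFE-style score: cost-weighted sum of |cumulative forecast error|."""
    cfe, total = 0.0, 0.0
    for a, f in zip(actual, forecast):
        cfe += f - a                                   # running cumulative error
        w = w_inventory if cfe > 0 else w_backorder    # over-forecast piles inventory
        total += w * abs(cfe)
    return total

actual = [100, 110, 95, 120, 105]
print(wacfe(actual, [102, 108, 100, 115, 110]),        # small, balanced errors
      wacfe(actual, [95, 105, 90, 112, 100]))          # systematic under-forecast costs more
```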
20.
Recently, scheduling problems with position-dependent processing times, where the processing times of jobs depend on the processing sequence, have received considerable attention in the literature. However, previous studies did not consider cases in which each job has a different learning or aging ratio: the actual processing time of a job can be determined not only by the processing sequence but also by a learning/aging ratio that reflects the degree of processing difficulty of subsequent jobs. Motivated by these remarks, in this paper we consider a two-agent single-machine scheduling problem with linear job-dependent position-based learning effects, where two agents compete to use a common single machine and each job has a different learning ratio. Specifically, we take into account two different objective functions for the two agents: one agent minimizes the total weighted completion time, and the other restricts the makespan to less than an upper bound. After formally defining the problem with a mixed integer nonlinear programming formulation, we devise a branch-and-bound (B&B) algorithm that gives optimal solutions, developing four dominance properties based on pairwise interchange comparisons and four properties regarding the feasibility of a considered sequence. We suggest a lower bound to speed up the search in the B&B algorithm by fathoming non-prominent nodes. As the problem is at least NP-hard, we also suggest efficient genetic algorithms that use different methods to generate the initial population and two crossover operations. Computational results show that the proposed algorithms are efficient at obtaining near-optimal solutions.
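As a toy illustration of the position-based learning effect, the sketch below evaluates every sequence of three hypothetical jobs under an assumed actual-time form p_j * r^(a_j) and picks the sequence minimizing total weighted completion time subject to a makespan cap; the job data, the cap, and this specific learning-effect form are assumptions, and the paper's B&B and genetic algorithms are not reproduced.

```python
import itertools

jobs = {   # name: (base processing time, weight, job-dependent learning exponent a_j)
    "J1": (10, 3, -0.2), "J2": (6, 1, -0.1), "J3": (8, 2, -0.3),
}

def evaluate(seq):
    """Return (total weighted completion time, makespan) for a job sequence."""
    t, twct = 0.0, 0.0
    for r, name in enumerate(seq, start=1):
        p, w, a = jobs[name]
        t += p * r ** a                  # actual time shrinks with position r
        twct += w * t
    return twct, t

makespan_cap = 22.0                      # the second agent's upper bound
results = [(evaluate(s), s) for s in itertools.permutations(jobs)]
feasible = [r for r in results if r[0][1] <= makespan_cap]
(best_twct, mk), seq = min(feasible)
print(f"best sequence {seq}: TWCT={best_twct:.1f}, makespan={mk:.1f}")
```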