The introduction of mass production was one of the most important advances of the modern industrial era, and die casting is one of its principal methods. Mold filling in aluminum die casting is a highly complicated process in which the liquid metal flows at high velocity, so for complex parts it is nearly impossible to predict mold filling behavior from experimental knowledge alone. Aluminum die castings are important in the automotive industry for producing lightweight automobile bodies. Under these conditions, simulation plays an increasingly critical role in the design procedure: it can identify the best configuration of a casting system and enhance casting quality. The cost and time savings in casting layout design are the greatest advantage of Computer Aided Engineering (CAE).
In general, the interactions among casting conditions such as the injection, gate, and cooling systems should be considered when designing a casting layout; because these conditions interact in various ways, the extent and location of product defects differ significantly. In this research, CAE simulation with the software AnyCasting was conducted on three layout designs to find the best alternative for the casting layout of an automotive oil pan (Oil Pan_BJ3E). The simulation results were carefully analyzed and compared so that they could be applied to the production die-casting mold. Internal porosity caused by air entrapment during the filling process was predicted, and the results of the three models were compared under modifications of the gate system and overflows. Internal porosity arising during solidification was predicted with a solidification analysis, and the results of the modified gate system were likewise compared.
The vehicle routing problem has been a vibrant research topic for half a century, and many studies have investigated it extensively to address practical decision-making issues in logistics. However, new logistics strategies keep demanding new solution methods, owing to the computational complexity of the problem and its inherent constraints. This paper therefore proposes a simulated annealing (SA) algorithm for a variant of the vehicle routing problem introduced in a previous study: a multi-depot, multi-trip vehicle routing problem with multiple heterogeneous vehicles restricted by a maximum permitted weight and a limited number of compartments. The SA algorithm generates an initial solution with a greedy-type construction algorithm and improves it using an enhanced SA procedure with three local search methods. A series of computational experiments is performed to evaluate the performance of the heuristic, and several managerial findings are discussed through scenario analyses. The experiments show that the proposed SA algorithm obtains good solutions within reasonable computation time, and the scenario analyses show that a transportation system allowed to visit non-dedicated factories outperforms one restricted to dedicated factories in truck management, requiring fewer vehicles and fewer trips to serve customer orders.
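To make the SA skeleton concrete, the following is a minimal Python sketch of the annealing loop with a single random relocate move. It assumes a precomputed distance matrix and a single depot, and it omits the weight and compartment constraints, the multiple depots and trips, and the three local search methods of the actual algorithm.

```python
import math, random

def route_cost(route, dist, depot=0):
    """Total travel distance of one round trip from the depot."""
    tour = [depot] + route + [depot]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def total_cost(solution, dist):
    return sum(route_cost(r, dist) for r in solution)

def neighbor(solution):
    """Random relocate move: move one customer to another route/position."""
    sol = [list(r) for r in solution]
    src = random.choice([i for i, r in enumerate(sol) if r])
    cust = sol[src].pop(random.randrange(len(sol[src])))
    dst = random.randrange(len(sol))
    sol[dst].insert(random.randrange(len(sol[dst]) + 1), cust)
    return [r for r in sol if r]          # drop emptied routes

def simulated_annealing(init, dist, t0=100.0, alpha=0.995, iters=20000):
    cur, cur_cost = init, total_cost(init, dist)
    best, best_cost, t = cur, cur_cost, t0
    for _ in range(iters):
        cand = neighbor(cur)
        delta = total_cost(cand, dist) - cur_cost
        # accept improvements always, deteriorations with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cur_cost + delta
        if cur_cost < best_cost:
            best, best_cost = cur, cur_cost
        t *= alpha                         # geometric cooling schedule
    return best, best_cost

# toy instance: node 0 is the depot, customers 1..4, symmetric distances
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
best, cost = simulated_annealing([[1, 2], [3, 4]], dist)
```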
The Fourth Industrial Revolution calls for innovation in manufacturers' traditional business models, and servitization is a corporate strategy for responding to such changes in the business environment. Today the value the market demands is created on the basis of product-service integration, so manufacturers must pursue fundamental innovation of their current strategies and business models. They need to create shared value with customers by providing product-service integrated offerings that go beyond development, production, and delivery. The purpose of this study is to develop evaluation indicators for selecting suppliers when a manufacturer offering product-service integrated value must obtain resources from outside. The case company is a manufacturing firm entering the retail IoT business as a new venture. The Delphi method is used to develop the supplier evaluation indicators. Academically, this study contributes the perspective of servitization to the use of the Delphi method; practically, it applies the value-creation approach of servitization by collecting opinions from both value providers and value consumers in the process of developing the indicators.
This study addresses a job-shop scheduling problem with the objective of minimizing total tardiness for job orders that have different due dates and different process flows. We propose a dispatching-rule-based scheduling algorithm that generates fast, efficient schedules. First, we show that a delay schedule can be optimal for the total tardiness measure in some cases. Based on this observation, we expand the search space for selecting the next operation so that delay schedules are explored: when a machine becomes available and the next operation must be chosen, we consider not only the operations already waiting at the machine but also those that have not yet arrived. Assuming each candidate operation is assigned to the available machine, the expected total tardiness is estimated, and the operation with the minimum expected total tardiness is selected. If that operation's job is still being processed on another machine, the available machine waits until the job arrives. Simulation experiments are carried out to test the suggested algorithm and compare it with well-known dispatching rules such as EDD, ATC, and COVERT. The results show that the proposed algorithm, MET, yields lower total tardiness than the existing rules without increasing the number of tardy jobs.
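As an illustration of the selection step, here is a hedged Python sketch. The abstract does not give the exact tardiness-estimation procedure, so this sketch completes each tentative schedule with an EDD pass over the remaining candidates; that completion rule, and all names, are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Op:
    job: str
    ready: float   # arrival time at this machine (may be in the future)
    proc: float    # processing time on this machine
    due: float     # due date of the parent job

def select_operation(machine_free_at, candidates):
    """MET-style selection: among operations waiting at, or still en route to,
    the machine, pick the one whose tentative schedule of all candidates
    gives the smallest estimated total tardiness."""
    def est_total_tardiness(first):
        t = max(machine_free_at, first.ready) + first.proc
        tard = max(0.0, t - first.due)
        # cheap completion estimate: sequence the rest by EDD (an assumption)
        for op in sorted((c for c in candidates if c is not first),
                         key=lambda o: o.due):
            t = max(t, op.ready) + op.proc
            tard += max(0.0, t - op.due)
        return tard
    return min(candidates, key=est_total_tardiness)

ops = [Op("J1", ready=0, proc=4, due=12),
       Op("J2", ready=3, proc=2, due=5)]   # J2 has not arrived yet at t = 0
print(select_operation(0.0, ops).job)      # "J2": idling until t = 3 avoids tardiness
```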
Although retailers deal with a large number of single-period inventory items, few such cases have been treated in practical decision making. Recent moves to strengthen fair trade, however, have created a real need for single-period inventory decision making. This study addresses the problem of choosing order quantities that maximize expected profit using the classical newsvendor model, targeting retail data on seasonal and perishable products. We apply the newsvendor model to actual retailer data and compare the resulting performance. We also propose a way to estimate the demand of the products sold, based on the ratio of actual to forecast demand, so that the newsvendor model can be applied. Through this data analysis we examine the effectiveness of the approach and offer some suggestions for applying it in practice.
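For reference, the classical newsvendor order quantity is the critical fractile of the demand distribution. The sketch below assumes normally distributed demand whose parameters would, in the spirit of the paper, be estimated from actual-to-forecast demand ratios; all numbers are illustrative.

```python
from statistics import NormalDist

def newsvendor_quantity(mu, sigma, price, cost, salvage=0.0):
    """Classical newsvendor: order up to the critical fractile of demand.
    Underage cost cu = price - cost, overage cost co = cost - salvage."""
    cu = price - cost
    co = cost - salvage
    critical_ratio = cu / (cu + co)
    return NormalDist(mu, sigma).inv_cdf(critical_ratio)

# e.g., a seasonal item with forecast demand N(100, 30):
q = newsvendor_quantity(100, 30, price=15, cost=9, salvage=4)  # ≈ 103 units
```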
This paper develops an algorithm that determines the batch size of a batch process in real time, improving production and enabling efficient control of a production system that contains multiple processes including batch processes. Finding the right batch size matters because the variability arising from the batch process affects the capacity of the whole system; in particular, batch size changes measures of system efficiency such as throughput, WIP (Work In Process), and batch formation time. To improve system variability and productivity, the real-time batch size is determined by considering the setup time and the batch formation time as functions of the number of operations at the batch process. The study controls WIP by applying the CONWIP production control method to the line and implements a real-time batch sizing algorithm for a batch process that requires a long setup time and strongly affects system efficiency. To verify the efficiency of the developed algorithm, we first implement the existing production system with a fixed batch size and then determine the batch size in real time from the WIP in the queue and the average lead time of the current system. The two systems are compared using three evaluation indices: lead time, throughput, and average WIP in the queue.
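The abstract does not give the exact batch-sizing rule, so the sketch below substitutes a standard queueing approximation (in the Factory Physics style) that captures the same trade-off the paper describes: larger batches amortize the setup but lengthen batch formation time. All parameter names are illustrative, not the paper's.

```python
def batch_cycle_time(k, ra, setup, unit_time, ca2=1.0, ce2=1.0):
    """Approximate time a part spends at the batch station for batch size k:
    wait-to-batch + queue (VUT formula) + setup + processing."""
    te = setup + k * unit_time          # effective batch process time
    u = ra * te / k                     # utilization (ra = arrival rate, parts/min)
    if u >= 1.0:
        return float("inf")             # unstable: batch too small for the load
    wait_to_batch = (k - 1) / (2 * ra)  # average wait for the batch to form
    queue = ((ca2 + ce2) / 2) * (u / (1 - u)) * te
    return wait_to_batch + queue + te

def best_batch_size(ra, setup, unit_time, k_max=50):
    """Pick the batch size with the smallest estimated station cycle time."""
    return min(range(1, k_max + 1),
               key=lambda k: batch_cycle_time(k, ra, setup, unit_time))

k = best_batch_size(ra=0.4, setup=30.0, unit_time=1.0)
```

In a real-time setting, ra, the setup time, and the variability terms would be re-estimated from the observed queue WIP and lead time, and k recomputed as conditions change.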
Manufacturing companies in a Make-To-Order (MTO) production environment face highly variable customer requirements, which makes it difficult to establish a preemptive production strategy through inventory management and demand forecasting. The ability to establish an optimal production schedule that incorporates these varied requirements is therefore a key success factor.
In this study, we present a process for designing a simulation model to establish production schedules and apply it to the case of a flat glass processing company, which operates in an MTO environment. Academic research on the flat glass industry has focused on minimizing waste in the cutting process, while in practice flat glass processors tend to build production schedules from the intuition of the production manager, which often results in missed due dates. Given these findings, the case study presents the process of drawing up a production schedule through simulation modeling. Actual data from a Korean flat glass processing company were used to build a monthly production schedule. Five scenarios based on dispatching rules are considered, and each scenario is evaluated with three key performance indicators for delivery compliance. We used the B2MML (Business To Manufacturing Markup Language) schema to integrate the manufacturing systems, and the simulations were carried out with the SIMIO simulation software. The results provide a basis for determining a suitable production schedule from the production manager's perspective.
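The abstract does not define the three delivery-compliance indicators, so the sketch below computes three plausible ones (on-time ratio, total tardiness, maximum lateness) from simulated completion times; the indicator choices and field names are assumptions for illustration.

```python
def delivery_kpis(jobs):
    """Three illustrative delivery-compliance KPIs for one simulated scenario.
    Each job is a dict with simulated 'completion' and contractual 'due' times."""
    lateness = [j["completion"] - j["due"] for j in jobs]
    tardiness = [max(0, l) for l in lateness]
    return {
        "on_time_ratio": sum(t == 0 for t in tardiness) / len(jobs),
        "total_tardiness": sum(tardiness),
        "max_lateness": max(lateness),
    }

# compare dispatching-rule scenarios by their KPI dictionaries
scenario_edd = [{"completion": 8, "due": 10}, {"completion": 12, "due": 11}]
print(delivery_kpis(scenario_edd))
```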
As many existing research fields mature academically, researchers encounter academic, social, and other problems that cannot be addressed by the internal knowledge and methodologies of a single discipline. Pioneering researchers are therefore following a new paradigm that breaks the boundaries between prior disciplines, fuses them, and seeks new approaches, and developed countries including Korea are actively supporting and fostering convergence research at the national level. Nevertheless, little research has analyzed convergence trends in national R&D support projects or the content those projects mainly address. This study therefore collected and preprocessed research proposal data from the National Research Foundation of Korea, transforming the proposal documents into term-frequency matrices. From these matrices, the study derived detailed research topics with Latent Dirichlet Allocation (LDA), a topic modeling algorithm. It then identified the topics each proposal mainly deals with, visualized the convergence relationships among them, and analyzed those relationships quantitatively. Specifically, the study analyzed the centralities of the detailed research topics to derive clues about near-future convergence, in addition to visualizing the convergence relationships and tracking the number of research proposals per topic over time. The results can give researchers specific insights into research directions and support year-by-year monitoring of domestic convergence R&D trends.
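The pipeline from term-frequency matrices to topics can be sketched with scikit-learn's LDA implementation; the toy corpus, topic count, and vectorizer settings below are illustrative stand-ins, not the paper's actual data or parameters.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

proposals = [  # stand-ins for preprocessed proposal texts
    "deep learning based sensor fusion for autonomous driving",
    "statistical learning for gene expression analysis",
    "energy management of sensor networks in smart factories",
]
vec = CountVectorizer(stop_words="english")
tf = vec.fit_transform(proposals)                 # term-frequency matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(tf)                 # per-proposal topic mixture

# the dominant topic of each proposal is the unit from which
# convergence links between topics can then be built
dominant = doc_topic.argmax(axis=1)
```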
A wireless sensor network is an emerging technology and an intelligent wireless communication paradigm: the network is dynamically aware of its surrounding environment and can respond to it to achieve reliable, efficient communication. This cognition capability and environmental adaptability depend on organizing the dynamic network effectively. However, optimally clustering a cognitive wireless sensor network is an NP-complete problem.
The objective of this paper is to develop an optimal sensor network design that maximizes performance. The proposed Ranking Artificial Bee Colony (RABC) extends the Artificial Bee Colony (ABC) algorithm with a ranking strategy: better solutions are produced by combining the best solutions found so far, and these are added to the solution population during the ABC run. RABC is designed to adapt to topological changes of any network graph over time. It minimizes the total energy dissipation of the sensors to prolong network lifetime while balancing the energy consumption of all nodes with a robust optimal solution. Simulation results show that the proposed RABC outperforms previous methods (LEACH, LEACH-C, etc.) for wireless sensor networks, and it performs best on the 100-node network example when the sink node is centrally located.
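Since the abstract leaves the ranking operator unspecified, the sketch below implements a standard ABC loop (employed, onlooker, and scout phases) and adds one assumed ranking step that recombines the two best food sources. The objective f would encode the clustering energy model, which is not reproduced here; it is assumed non-negative.

```python
import random

def rabc_minimize(f, dim, lo, hi, n_food=20, limit=30, iters=500):
    """Compact ABC with an assumed ranking step (recombination of elites)."""
    rand = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    foods = [rand() for _ in range(n_food)]
    vals = [f(x) for x in foods]
    trials = [0] * n_food

    def try_neighbor(i):
        k = random.choice([j for j in range(n_food) if j != i])
        j = random.randrange(dim)
        v = foods[i][:]
        v[j] += random.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        v[j] = min(max(v[j], lo), hi)
        fv = f(v)
        if fv < vals[i]:
            foods[i], vals[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                       # employed bees
            try_neighbor(i)
        fit = [1.0 / (1.0 + v) for v in vals]         # assumes f >= 0 (energy)
        for _ in range(n_food):                       # onlooker bees (roulette)
            try_neighbor(random.choices(range(n_food), weights=fit)[0])
        for i in range(n_food):                       # scouts replace stale sources
            if trials[i] > limit:
                foods[i] = rand()
                vals[i], trials[i] = f(foods[i]), 0
        order = sorted(range(n_food), key=lambda i: vals[i])
        a, b = foods[order[0]], foods[order[1]]       # ranking step (assumed):
        child = [(x + y) / 2 for x, y in zip(a, b)]   # recombine the two best
        fc, worst = f(child), order[-1]
        if fc < vals[worst]:
            foods[worst], vals[worst], trials[worst] = child, fc, 0
    best = min(range(n_food), key=lambda i: vals[i])
    return foods[best], vals[best]
```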
The Cooperative Engagement Capability (CEC) system produces a synergy between the sensors and shooters on various platforms by integrating them. The US Navy has recently been adopting the CEC system, which maximizes the effectiveness of air defense operations by efficiently coordinating dispersed air defense assets, and the navies of other countries are studying the theory and application methods of CEC. The ROK Navy has limited air defense capability because the weapon systems on its ships operate independently; it is therefore at the stage of proving the validity of building a CEC system, which would provide a way to overcome the limits of platform-based air defense. In this study, we propose methods to maximize the air defense capability of the ROK Navy, identify the assets available for constructing a CEC system, and estimate the effects of the CEC system when applied to naval operations. We also provide a simple model developed to estimate these effects, together with a case study on virtual data that demonstrates them. The results of this study can serve as a basis for building a Korean CEC system.
Recently, manufacturing process systems in industry have become more and more complex, influenced by many different factors. Moreover, these factors are usually correlated rather than independent of one another, so statistical analysis has been extended from univariate to multivariate methods. Process capability indices are widely used statistical tools for assessing manufacturing process performance, and multivariate process capability indices in particular need to be enhanced with more useful information for wider application in industry. Various multivariate process capability indices have been studied by many researchers in recent years. The purpose of this study is therefore to compare several useful multivariate process capability indices, such as MCpm, MC+pm, and MCpl, through simulation. These indices either incorporate both the process variation and the process deviation from target or consider the expected loss caused by the deviation from target. Through computational examples, we compare the indices and discuss their usefulness and effectiveness.
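For orientation, one widely cited formulation of MCpm (due to Taam et al.) separates the process spread from the deviation from target; the paper's variants MC+pm and MCpl alter how the off-target loss enters, and their exact forms are not reproduced here.

```latex
\[
  MC_{pm} \;=\; \frac{C_p}{D}, \qquad
  D \;=\; \Bigl(1 + (\boldsymbol{\mu}-\mathbf{T})^{\top}
        \boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}-\mathbf{T})\Bigr)^{1/2},
\]
```

where Cp is the ratio of the volume of the engineering tolerance region to the volume of the 99.73% process region, μ and Σ are the process mean vector and covariance matrix, and T is the target vector; D grows as the process drifts off target, shrinking the index.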
Small and Medium-sized Enterprises (SMEs) face rapid changes in their business environments due to the evolution of technologies and innovation in societal eco-systems. Because the dynamic interactions between these environments and enterprise activities have become significant, technology planning, the process of identifying appropriate directions for product and technology development, has received much attention as a way to cope with such dynamics proactively. However, SMEs are typically limited in performing independent, strategic, and systematic technology planning because they lack human, material, and financial resources. This paper proposes a product roadmapping method that enables SMEs to carry out efficient technology planning connected to their external business environments. The present work provides product roadmap templates that directly capture the influence of business environments on the product's system and its associated super- and sub-systems, using external environment analysis techniques including the TRIZ methodology, PEST, and Five Forces analysis. These templates make it possible to efficiently forecast the directions of product development and evolution that arise from changes in the external environment. Consequently, the present work enables SMEs to cope flexibly with the era of the next R&D generation, which pursues value creation through the mutual interconnection of business environments and technology development.
In this study, we propose a model for forecasting power demand by investigating how the outside temperature at a given time affects power consumption. We analyzed the power consumption time series in terms of its power spectrum and found periodicities of one day and one week. Using these periodicities, we examined the temperature and power consumption series together and found, for a given hour, an approximately linear relation between temperature and power consumption. We adopted an exponential smoothing model and adjusted it with the variation in power consumption due to temperature change, so that the proposed model is a mixture of a time series model and a regression model. The adjusted model outperformed exponential smoothing alone, reducing the mean relative percentage error by 3%~8% and the root mean square error by 4 kWh~27 kWh. The results can be used in an energy management system to control electric energy usage effectively in conjunction with the outside temperature.
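A hedged sketch of the adjustment idea: fit the hour-specific linear load-temperature slope, smooth the load series exponentially, and shift the forecast by the predicted temperature change. Function and parameter names are illustrative, not the paper's.

```python
def forecast_load(loads, temps, temp_forecast, alpha=0.3):
    """Exponential smoothing of hourly load, adjusted by an hour-specific
    linear temperature effect (sketch under stated assumptions)."""
    n = len(loads)
    mt, ml = sum(temps) / n, sum(loads) / n
    # least-squares slope of load on temperature for this hour of day
    b = (sum((t - mt) * (l - ml) for t, l in zip(temps, loads))
         / sum((t - mt) ** 2 for t in temps))
    level = loads[0]                      # simple exponential smoothing
    for l in loads[1:]:
        level = alpha * l + (1 - alpha) * level
    # shift the smoothed level by the expected temperature change
    return level + b * (temp_forecast - temps[-1])
```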
There are two methods for evaluating two or more mutually exclusive projects: the total investment approach and the incremental investment approach. The former can rank projects by the net present value criterion, but the latter cannot, since an incremental investment approach is only possible when all pairwise alternatives are compared; the total investment approach is therefore superior for ranking. To rank projects, a principle of comparison must be established: comparisons of profitability are reasonable only when the same amount of investment is operated over the same period of time. One such principle is that every project is scaled to the investment of the largest project; another is that every project is evaluated over the life of the longest project. In this paper we show that even when this principle is followed, the external rate of return fails to rank the projects, whereas the productive rate of return can rank them just as the net present value criterion does, provided the principle of comparison is kept. In addition, rates of return can then be assessed so that all mutually exclusive projects are compared at once, as under the net present value criterion, and can also be compared with other returns, such as the profit rates of financial or real investments.
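A minimal illustration of why the total investment approach ranks directly by NPV (the cash flows are invented and the MARR of 10% is assumed):

```python
def npv(cash_flows, rate):
    """Net present value of c_0, c_1, ..., c_n at the given period rate."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

# two mutually exclusive projects with different sizes and lives
A = [-1000, 500, 500, 500]            # smaller, shorter project
B = [-1500, 300, 600, 600, 600]       # larger, longer project
marr = 0.10
print(npv(A, marr), npv(B, marr))     # ≈ 243 vs ≈ 129: NPV ranks A over B
```

Under the comparison principle, the 500 difference in initial investment and the extra year of project B would be assumed invested at the MARR; since investment at the MARR has zero NPV, the NPV ranking is unchanged, which is exactly what makes the total investment approach consistent.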
We focus on the weapon target assignment and fire scheduling problem (WTAFSP) with the objective of minimizing the makespan, i.e., the latest completion time of a given set of firing operations. In this study, we assume there are m available weapons to fire at n targets (n > m). The artillery attack operation consists of two sequential steps: assigning weapons to the targets, and scheduling the firing operations against the targets assigned to each weapon. The problem is thus a combination of the weapon target assignment problem (WTAP) and the fire scheduling problem (FSP). We formulate it as a mixed integer program and develop exact algorithms based on a dynamic programming technique, together with procedures for finding lower and upper bounds. Computational experiments on randomly generated problems show that the suggested exact algorithm solves medium-sized instances within a reasonable amount of computation time, although its computation time grows rapidly with problem size. We report these results with analysis and give directions for future research. This study is meaningful in that it provides an exact algorithm for a more realistic problem than those in existing research, and it can serve as a basis for developing algorithms that solve larger instances.
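For intuition, the sketch below solves a toy version of the problem exactly by enumerating all weapon-target assignments. With sequence-independent firing times, each weapon's completion time is simply the sum of its assigned firing times, so the firing order does not affect the makespan in this simplified setting; the paper's MIP and dynamic programming handle larger and richer instances.

```python
from itertools import product

def wtafsp_bruteforce(fire_time):
    """Exact makespan for a tiny instance by full enumeration (m**n cases).
    fire_time[w][t] = time weapon w needs to fire at target t."""
    m, n = len(fire_time), len(fire_time[0])
    best, best_assign = float("inf"), None
    for assign in product(range(m), repeat=n):   # assign[t] = weapon for target t
        loads = [0.0] * m
        for t, w in enumerate(assign):
            loads[w] += fire_time[w][t]          # sequential firings accumulate
        if max(loads) < best:
            best, best_assign = max(loads), assign
    return best, best_assign

fire_time = [[3, 1, 4, 2],    # weapon 0's firing time per target
             [2, 5, 1, 3]]    # weapon 1's firing time per target
print(wtafsp_bruteforce(fire_time))   # minimal makespan (3.0) and one optimal assignment
```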
This study concerns the process capability index (PCI). We introduce several indices, including the index CPR, and present the characteristics and the validity of CPR. The difference between CPR and the other indices lies in how the standard deviation is estimated: most indices use the sample standard deviation, while CPR uses the range R. The sample standard deviation is generally a better estimator than the range, but for the panel process CPR is more consistent than the other indices with respect to the non-conforming ratio, an important quantity in quality control. The better consistency of the range-based CPR is explained by introducing the concept of the 'flatness ratio'. Since at least one million cells are present in one panel, we cannot inspect all of them, so in estimating the PCI the inspection cost must be considered together with consistency. A smaller sample size lowers inspection cost but makes the PCI unreliable; there is a trade-off between inspection cost and the accuracy of the PCI, so the sample should be as large as the allowed inspection cost permits. For CPR to be used throughout the industry, its characteristics must be analyzed; because CPR is an index built on subgroups, the analysis should focus on the sample size of the subgroup. We present numerical analysis results for CPR using randomly generated data and show the difference between the range-based CPR and CP, a representative index that uses the sample standard deviation. Regression analysis was used for the numerical analysis of the sample data, along with residual analysis and equal variance analysis.
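The two estimation routes can be contrasted directly: the classical CP uses the sample standard deviation, while a range-based index in the spirit of CPR divides the mean subgroup range by the control-chart constant d2. The CPR in the paper additionally involves the flatness ratio, which is not reproduced here; the sketch below shows only the standard-deviation-versus-range contrast.

```python
import statistics

D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}   # standard d2 constants by subgroup size

def cp_from_std(data, lsl, usl):
    """Representative index CP: sigma estimated by the sample standard deviation."""
    return (usl - lsl) / (6 * statistics.stdev(data))

def cp_from_range(subgroups, lsl, usl):
    """Range-based estimate in the spirit of CPR: sigma ≈ mean(R) / d2,
    computed over rational subgroups (e.g., cells sampled within one panel)."""
    n = len(subgroups[0])
    r_bar = statistics.mean(max(g) - min(g) for g in subgroups)
    return (usl - lsl) / (6 * r_bar / D2[n])

subgroups = [[9.8, 10.1, 10.0, 9.9, 10.2], [10.0, 9.7, 10.1, 10.0, 9.9]]
flat = [x for g in subgroups for x in g]
print(cp_from_std(flat, 9.0, 11.0), cp_from_range(subgroups, 9.0, 11.0))
```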
In pursuing carbon emission reduction, companies have focused mostly on cutting emissions through more efficient equipment and facilities, overlooking a significant source of emissions driven by operational policies. Companies are now looking for ways to reduce the carbon emissions associated with their operations, and operational adjustments, such as modifying order quantities, can be an effective way to reduce emissions in the supply chain. The cap-and-trade mechanism is also generally accepted as one of the most effective market-based mechanisms for reducing carbon emissions. In this paper, we investigate a multi-product inventory model for a supply chain with a single manufacturer and multiple retailers under a cap-and-trade system, incorporating the carbon emissions caused by transportation and warehousing activities. We provide an iterative solution algorithm that derives the common order interval and the number of intervals for each product. A numerical example shows that the inventory model incorporating the cap-and-trade mechanism can reduce total cost and carbon emissions compared with the classical inventory model, and we also use numerical examples to investigate the effect of different carbon prices on the performance of the model.
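The structure of the cost being minimized can be sketched as follows. The cost function and the square-root rule for the integer multiples k_i are standard joint-replenishment ingredients with the carbon cost folded in; all names are illustrative and the paper's exact model may differ.

```python
import math

def annual_cost(T, ks, d, h, A, e_ord, e_hold, p_c, cap=0.0):
    """Cost of a common-order-interval policy: product i is ordered every
    k_i * T years. Ordering/holding emissions are priced at p_c under
    cap-and-trade (shortfalls are bought, surpluses sold)."""
    cost = emis = 0.0
    for i in range(len(d)):
        Ti = ks[i] * T
        orders_per_year = 1.0 / Ti
        avg_inventory = d[i] * Ti / 2.0
        cost += A[i] * orders_per_year + h[i] * avg_inventory
        emis += e_ord[i] * orders_per_year + e_hold[i] * avg_inventory
    return cost + p_c * (emis - cap)

def best_ks(T, d, h, A, e_ord, e_hold, p_c):
    """For a fixed common interval T, the cost-minimizing integer multiple
    of product i follows the classic square-root rule with
    carbon-adjusted order and holding costs."""
    ks = []
    for i in range(len(d)):
        a = A[i] + p_c * e_ord[i]                  # effective order cost
        g = (h[i] + p_c * e_hold[i]) * d[i] / 2.0  # effective holding rate
        ks.append(max(1, round(math.sqrt(a / g) / T)))
    return ks
```

An iterative scheme in this spirit alternates the two subproblems: fix T and compute the k_i, then fix the k_i and re-optimize T, repeating until the k_i stop changing.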
The purposes of this study are, first, to identify the core competences of design leadership and, second, to propose the distinctive and effective leadership capabilities required in the design field. To obtain reliable and meaningful core competencies of design leadership, we identified them through a group of experts drawn from the top tier of design leaders (design executives and heads of design institutes) rather than through a survey of unspecified designers. In addition, a distinctive interview method, AI (Appreciative Inquiry), was used so that interviewees could share their actual experiences and philosophies beyond the core competencies themselves. The study revealed that the core competences of design leadership differ from those of conventional general leadership and MOT (Management of Technology) leadership. It also gives leaders in other fields an opportunity to understand and collaborate with the design field, while demonstrating the necessity of, and justification for, unique leadership traits in design.