Journal of Society of Korea Industrial and Systems Engineering (한국산업경영시스템학회지, KCI-indexed)

Issue

Vol. 39 No. 2 (June 2016), 17 articles

1.
In this paper, we present a new way to derive the mean cycle time of the G/G/m failure-prone queue when the loading of the system approaches zero. The loading is the ratio of the arrival rate to the service rate multiplied by the number of servers. A system with low loading is one whose busy fraction is low. Queueing systems with low loading can be found in the semiconductor manufacturing process. Cluster tools in semiconductor manufacturing need a setup whenever the types of two successive lots differ. To set up a cluster tool, all wafers of the preceding lot must be removed; the waiting time of the next lot is then zero, excluding the setup time. This kind of situation can be regarded as a system with low loading. By employing an absorbing Markov chain model and renewal theory, we propose a new way to derive the exact mean cycle time. In addition, using the proposed method, we present the cycle times of other types of queueing systems. For a queueing model with a phase-type service time distribution, we can obtain a two-dimensional Markov chain model, which allows us to calculate the exact cycle time. The results can also be applied to a queueing model with batch arrivals. Our results can be employed to test the accuracy of existing or newly developed approximation methods. Furthermore, we provide intuitive interpretations of the results regarding the expected waiting time. These intuitive interpretations can be used to understand logically the characteristics of systems with low loading.
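For reference, the loading described above can be written compactly. A minimal formulation, with assumed symbols not taken from the paper (λ the arrival rate, μ the per-server service rate, m the number of servers, W_q the queueing delay, S the service time):

```latex
\rho = \frac{\lambda}{m\,\mu},
\qquad
\mathbb{E}[\mathrm{CT}] = \mathbb{E}[W_q] + \mathbb{E}[S].
```

For a reliable (failure-free) G/G/m queue, E[W_q] vanishes as ρ → 0, so the cycle time collapses to the mean service time; intuitively, it is the server failures that keep the limiting waiting time from vanishing, which appears to be the regime analyzed here.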
2.
Product liability (PL), enforcement of which began in 2002, refers to the legal responsibility of manufacturers or sellers (wholesalers or retailers) for property damage or bodily harm caused by their products. With strong enforcement of the Product Liability (PL) Act, companies are required to structure and operate a response system to defend against or prevent product accidents efficiently, but small and medium enterprises (SMEs) are unable to respond aggressively due to limited management resources. It is therefore important to develop response strategies for SMEs to cope efficiently with the PL Act. In this paper, a PEST (Political, Economic, Social, and Technological) analysis is performed to reveal the impact of the PL Act on SMEs from a macro-economic point of view. To formulate an SME PL response strategy, a SWOT analysis is performed to categorize the factors from the PEST analysis, and AHP is applied to identify the intensities of the SWOT factors. The prioritized SWOT factors, the results of the PEST-SWOT-AHP analysis, are used to formulate SME PL response strategies. The study results are briefly summarized as follows. To reduce product defects, it is necessary for SMEs to formulate PL response strategies for each phase of the product life cycle by continuously collecting and analyzing PL cases in the same industry or for similar products. In addition, SMEs should invest more technological effort to ensure product safety. Further, SMEs should spread PL awareness to all staff members by training internal PL experts. Moreover, an SME should enroll in PL insurance and share this information with its customers so that they become aware that the company proactively conforms to the PL Act.
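As a sketch of the AHP step used to weight the SWOT factors, the following is a minimal Python example; the 4×4 pairwise comparison matrix over the SWOT groups is invented for illustration and is not data from the study:

```python
import numpy as np

# Hypothetical 4x4 pairwise comparison matrix for the SWOT groups
# (Strengths, Weaknesses, Opportunities, Threats); values are illustrative.
A = np.array([
    [1,   3,   2,   5],
    [1/3, 1,   1/2, 2],
    [1/2, 2,   1,   3],
    [1/5, 1/2, 1/3, 1],
])

# Geometric-mean approximation of Saaty's principal eigenvector.
w = np.prod(A, axis=1) ** (1 / A.shape[0])
w /= w.sum()

# Consistency ratio (RI = 0.90 is the commonly tabulated random index for n = 4).
lam_max = (A @ w / w).mean()
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)
cr = ci / 0.90
print("weights:", w.round(3), "CR:", round(cr, 3))
```

A consistency ratio below 0.1 is conventionally taken as acceptable before the weights are used to prioritize the factors.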
3.
Maritime transport is now regarded as one of the main contributors to global climate change by virtue of its CO2 emissions. Meanwhile, slow steaming, i.e., sailing at slower ship speeds, has become a common practice in the maritime industry so as to lower CO2 emissions and reduce bunker fuel consumption. The practice raises various operational decision issues for shipping companies: how fast ships should sail, how much fuel to bunker, and at which ports to bunker. In this context, this study addresses an operational problem in shipping companies: determining the ship speed, bunkering ports, and bunkering amounts at those ports over a given ship route to minimize the bunker fuel and ship time costs as well as the carbon tax, a regulatory measure aimed at reducing CO2 emissions. The ship time cost is included in the problem because slow steaming increases transit times, which implies increased in-transit inventory costs for shippers. We formulate the problem as a nonlinear lot-sizing model and suggest a Lagrangian heuristic to solve it. The performance of the heuristic algorithm is evaluated using data obtained from reliable sources. Although the problem is an operational one, the heuristic algorithm is used to address various strategic issues facing shipping companies, including the effects of bunker prices, carbon taxes, and ship time costs on the ship speed, bunkering amounts, and number of bunkering ports. For this, we conduct sensitivity analyses of these factors and finally discuss the study findings.
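To make the speed trade-off concrete, here is a hedged sketch of the cost structure behind slow steaming; it uses the common cube-law fuel assumption and illustrative prices, not the paper's nonlinear lot-sizing model or its Lagrangian heuristic:

```python
# A minimal sketch of the cost trade-off behind slow steaming.
# Assumptions (not from the paper): daily fuel burn scales with the cube
# of speed, and in-transit inventory cost is linear in voyage time.

def voyage_cost(speed_knots, distance_nm, base_burn=100.0, base_speed=20.0,
                fuel_price=300.0, carbon_tax=30.0, time_cost=5000.0):
    """Total cost of one leg at a given speed.
    base_burn: tons/day at base_speed; fuel_price: $/ton;
    carbon_tax: $/ton of fuel (a simplification; real taxes are usually
    levied per ton of CO2); time_cost: $/day of ship time.
    All values are illustrative."""
    days = distance_nm / (speed_knots * 24)
    fuel = base_burn * (speed_knots / base_speed) ** 3 * days
    return fuel * (fuel_price + carbon_tax) + days * time_cost

# Brute-force the speed that minimizes total cost on a 5,000 nm leg.
best = min(range(10, 25), key=lambda v: voyage_cost(v, 5000))
print(best, round(voyage_cost(best, 5000)))
```

Lower speeds cut the fuel and carbon-tax terms but inflate the ship-time term, which is exactly the tension the paper's sensitivity analyses explore.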
4.
Recently, owing to the development of the ICT industry and the wide spread of smartphones, the number of people who use car sharing services has increased rapidly. Currently, two-way car sharing systems with the same rental and return locations are mainly operated, since such systems can be easily implemented and maintained. Meanwhile, the demand for one-way car sharing services has increased explosively. However, one-way systems face several operational obstacles, especially vehicle stock imbalance issues that necessitate vehicle relocation. Hence, in this study, we present an optimization approach to depot location and relocation policy in one-way car sharing systems. First, we built a mixed-integer programming model whose objective is to maximize the profits of a car sharing organization, considering all the revenues and costs involved and several relocation policy constraints. To solve this problem efficiently, we proposed a new method based on particle swarm optimization, a powerful meta-heuristic. The practical usefulness of the approach is illustrated with a case study involving satellite cities in the Seoul Metropolitan Area, including several candidate areas where such systems have not yet been installed as well as areas already in operation. Our proposed approach produced plausible solutions with rapid computation times and little deviation from the optimal solutions obtained by CPLEX Optimizer. We also found that particle swarm optimization can serve as an efficient method under various constraints. Based on these results, we can gain clear insight into the impact of depot location and relocation policy schemes on the profitability of such systems.
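For readers unfamiliar with the meta-heuristic, a minimal continuous PSO sketch follows; the paper's discrete depot-location encoding and profit model are not reproduced, and the sphere function merely stands in for the (negated) objective:

```python
import numpy as np

# A minimal particle swarm optimization sketch (continuous version).
rng = np.random.default_rng(0)

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-10, hi=10):
    x = rng.uniform(lo, hi, (n, dim))        # particle positions
    v = np.zeros((n, dim))                   # velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval                  # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy objective: sphere function standing in for the negated profit model.
print(pso(lambda z: float((z ** 2).sum()), dim=5))
```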
5.
The control chart is a representative tool of statistical process control (SPC): a graph on which characteristic values from the process are plotted. Its use has two phases. Phase I is the procedure for estimating the process parameters from data obtained from an in-control process; at this stage the standard values have not yet been determined. Phase II monitors the process using the parameters already known from Phase I. On these charts, the process quality characteristic values are plotted as dots and checked for whether they fall within the control limits. However, this gives no information about the economic loss that occurs when a product characteristic value does not match the target value. To meet customer needs, a company must not only keep process variation stable but also produce products that meet the target value. Taguchi's quadratic loss function incorporates information about the economic loss incurred by missing the target value. However, it is a very simple quadratic curve, so it is difficult for it to realistically reflect the amount of loss incurred by a deviation from the target value, and it is well suited only to normal processes. Spiring proposed an alternative, the reflected normal loss function (RNLF). In this paper, we design a new control chart that overcomes these disadvantages by using Spiring's RNLF, and we demonstrate the effectiveness of the new chart by comparing its average run length (ARL) with those of the x̄-R control chart and the expected loss control chart (ELCC).
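A small sketch contrasting Spiring's RNLF with the Taguchi quadratic loss; the functional form L(y) = K[1 − exp(−(y − T)²/2γ²)] follows the usual statement of the RNLF, and K, T, and γ here are illustrative values, not the paper's design parameters:

```python
import numpy as np

def rnlf(y, T=10.0, K=100.0, gamma=1.0):
    """Reflected normal loss: rises toward the maximum K away from target T."""
    return K * (1.0 - np.exp(-((y - T) ** 2) / (2.0 * gamma ** 2)))

def taguchi(y, T=10.0, k=25.0):
    """Quadratic loss: grows without bound as y departs from T."""
    return k * (y - T) ** 2

y = np.linspace(7, 13, 7)
print(np.round(rnlf(y), 2))     # bounded by K
print(np.round(taguchi(y), 2))  # unbounded
```

The bounded shape is what lets the RNLF model losses that saturate far from target, instead of growing quadratically forever.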
6.
In mobile communication systems, location management deals with determining the locations of users in a network. One strategy used in location management is to partition the network into location areas, each consisting of a group of cells. The goal of location management is to partition the network into a number of location areas such that the total paging and handoff (or update) cost is minimized. Finding the optimal number of location areas and the corresponding configuration of the partitioned network is a difficult combinatorial optimization problem. This cell grouping problem seeks a compromise between the location update and paging operations such that the cost of mobile terminal location tracking in a location-area wireless network is minimized; in fact, it was shown to be NP-complete in an earlier study. In this paper, an artificial bee colony (ABC) algorithm is developed and proposed to obtain the best or optimal grouping of cells for location area planning. The performance of ABC is better than or similar to those of other population-based algorithms, with the advantage of employing fewer control parameters. The only important control parameter of ABC is 'Limit', the number of trials after which a food source is assumed to be abandoned. Simulation results for 16-, 36-, and 64-cell grouping problems in a wireless network show that the performance of our ABC is better than that of alternatives such as ant colony optimization (ACO) and particle swarm optimization (PSO).
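A minimal ABC sketch follows, highlighting the role of the 'Limit' parameter in the scout phase; it runs on a continuous test function, whereas the paper uses a discrete cell-grouping encoding, and the employed and onlooker phases are merged for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

def abc(f, dim, n_food=20, iters=300, limit=50, lo=-5.0, hi=5.0):
    x = rng.uniform(lo, hi, (n_food, dim))        # food sources
    val = np.apply_along_axis(f, 1, x)
    trials = np.zeros(n_food, dtype=int)          # stagnation counters
    for _ in range(iters):
        for i in range(n_food):
            k = (i + 1 + rng.integers(n_food - 1)) % n_food  # partner != i
            j = rng.integers(dim)
            cand = x[i].copy()
            cand[j] += rng.uniform(-1, 1) * (x[i, j] - x[k, j])
            cand[j] = min(max(cand[j], lo), hi)
            fc = f(cand)
            if fc < val[i]:                        # greedy replacement
                x[i], val[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        # Scout phase: abandon any source that stagnated past 'Limit'.
        for i in np.where(trials > limit)[0]:
            x[i] = rng.uniform(lo, hi, dim)
            val[i] = f(x[i])
            trials[i] = 0
    return x[val.argmin()], val.min()

print(abc(lambda z: float((z ** 2).sum()), dim=4))
```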
7.
As technology has developed and the cost of data processing has fallen, the big data market has grown. Developed countries such as the United States have invested constantly in the big data industry and achieved remarkable results, such as improved advertisement effectiveness and patents for customer service. Every company aims at long-term survival and profit maximization, but it needs to establish a good strategy, considering current industrial conditions, to accomplish its goals in the big data industry. However, since the domestic big data industry is at an early stage, local companies lack a systematic method for establishing a competitive strategy. This research therefore aims to help local companies diagnose their big data capabilities through a reference model and a big data capability assessment system. The big data reference model consists of five maturity levels (Ad hoc, Repeatable, Defined, Managed, and Optimizing) and five key dimensions (Organization, Resources, Infrastructure, People, and Analytics). The big data assessment system is designed around the reference model's key factors. The Organization dimension has four key diagnosis factors: big data leadership, big data strategy, analytical culture, and data governance. The Resources dimension has three factors: data management, data integrity, and data security/privacy. The Infrastructure dimension has two factors: big data platform and data management technology. The People dimension has three factors: training, big data skills, and business-IT alignment. The Analytics dimension has two factors: data analysis and data visualization. This reference model and assessment system should be a useful guideline for local companies.
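For convenience, the dimensions and diagnosis factors above transcribe directly into a simple lookup structure; the structure itself (not the code) comes from the abstract:

```python
# The reference model's maturity levels and diagnosis factors,
# transcribed from the abstract as a lookup structure.
MATURITY_LEVELS = ["Ad hoc", "Repeatable", "Defined", "Managed", "Optimizing"]

DIMENSIONS = {
    "Organization":   ["big data leadership", "big data strategy",
                       "analytical culture", "data governance"],
    "Resources":      ["data management", "data integrity",
                       "data security/privacy"],
    "Infrastructure": ["big data platform", "data management technology"],
    "People":         ["training", "big data skills", "business-IT alignment"],
    "Analytics":      ["data analysis", "data visualization"],
}
```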
8.
Recently, nanotechnology has grown into one of the leading science technologies, converging with fields such as biology, information, and medicine, and this has brought continuous government investment in nano-related fields. However, it is difficult to measure and evaluate the performance of national research and development programs because of the multidimensional character of the expected outcomes. This study aims to measure the efficiency of the national nanotechnology research and development programs using a DEA model. The decision making units are nine nano-related ministries, including the Ministry of Science, ICT and Future Planning. The input variables are total expenditure, number of programs, and average expenditure per program. The output variables are science, technology, and economic indicators, and combinations of these outputs are measured as seven different DEA cases. The Ministry of Science, ICT and Future Planning ranked first in total technical efficiency. The Ministry of Agriculture, Food and Rural Affairs and the Ministry of Food and Drug Safety were efficient in pure technical efficiency, while the Ministry of Commerce, Industry and Energy ranked first in scale efficiency. Program efficiency was affected by organizational characteristics such as the institution's scale and its concentration of research papers, patents, technology transfer, or commercialization. The results of this study could be utilized in developing policy in nanotechnology and related fields. Furthermore, they could be applied to the modification of expenditure management or the adjustment of the input and output scales of each ministry's research and development programs.
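As an illustration of the underlying DEA computation, here is a sketch of the input-oriented CCR multiplier model solved as a linear program; the four DMUs and their input/output data are invented, not the ministries' actual figures:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical DMUs: inputs = [expenditure, program count], output = [papers].
X = np.array([[5.0, 30], [8.0, 45], [3.0, 20], [6.0, 50]])  # inputs
Y = np.array([[120.0], [150.0], [70.0], [160.0]])           # outputs

def ccr_efficiency(o):
    """Input-oriented CCR: max u.y_o s.t. v.x_o = 1, u.y_j - v.x_j <= 0."""
    s, m = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])             # minimize -u.y_o
    A_ub = np.hstack([Y, -X])                            # u.y_j - v.x_j <= 0
    b_ub = np.zeros(len(X))
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```

Efficient DMUs score 1.0; in practice one such LP is solved per DMU for each input/output combination, mirroring the seven DEA cases above.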
9.
A variety of uncertainties can be included in all measurement results, whether or not we perceive their causes. These uncertainties may end up lowering the reliability of measurement results and deteriorating the level of quality. For this purpose, we combined the strengths of measurement uncertainty and measurement system analysis to present a practical flowchart for verifying potential variation factors in general measurement processes. As a case study, we conducted an experiment with a coordinate measuring machine on the length between two holes of an engine cylinder head, a core part for vehicles, and estimated nine uncertainty factors. Consequently, four primary factors among the nine were identified as especially influential in the laboratory: those related to the measurement standard, random errors (the spread of the repeated measurements), differences between coefficients of thermal expansion, and the environment. Since measurement uncertainty alone cannot separate the equipment and appraiser variations, we applied measurement system analysis following the flowchart. The results showed an appraiser variation of only about 0.5%, while the equipment variation accounted for about 7% of the total Gage R&R. Through this research, we conclude that variation factors in measurement processes can be identified in much more detail using the developed flowchart, which combines measurement uncertainty and measurement system analysis. We therefore expect engineers involved in quality and measurement to utilize this method.
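As a reminder of the basic combination step, the following sketch propagates a few standard uncertainties GUM-style by root-sum-of-squares; the component values are purely illustrative, not the nine factors estimated in the study:

```python
import math

# Combining standard uncertainties (GUM-style root-sum-of-squares).
components_um = {           # standard uncertainties in micrometers (illustrative)
    "measurement standard": 0.8,
    "repeatability":        1.2,
    "thermal expansion":    0.6,
    "environment":          0.5,
}

u_c = math.sqrt(sum(u ** 2 for u in components_um.values()))
U = 2 * u_c   # expanded uncertainty, coverage factor k = 2 (~95 %)
print(f"combined: {u_c:.2f} um, expanded (k=2): {U:.2f} um")
```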
10.
As contemporary society has become more complicated, specialized, and segmented, people are experiencing more diverse types of stress. In particular, while several factors associated with job stress have been examined among nurses, who belong to a professional group, the existing research has made no quantitative assessments of stress that reflect temporal differences within individuals. Therefore, the aim of this study is to understand the effects of job stress on alpha-amylase with regard to the working hours of nurses, to assess the variation in job stress over time, and to provide basic data to improve the quality of nursing services. Ninety nurses working in three shifts in general, emergency, and intensive care wards of a university hospital in D City participated in this study. Salivary alpha-amylase (SAA) was extracted and analyzed at two-hour intervals from 07:00 to 15:00 from nurses on the day shift and from 23:00 to 07:00 from those working the night shift. The SAA level was highest between 23:00 and 01:00 for nurses in general wards (mean ± S.D. 39.00 ± 14.88) and between 11:00 and 13:00 for those in intensive care units and emergency wards (mean ± S.D. 67.50 ± 62.93 and 39.67 ± 35.96, respectively). The characteristic variation in SAA was significant between 23:00 and 01:00 (p < 0.01) and for those in their fifties or older (p < 0.01). The activation ratio of alpha-amylase, a stress reactant, increased when the sympathetic nervous system was activated by mental stress; in addition, job stress manifested with an awakening effect at different time segments and at different ages among the nurses. To raise the level of service by helping nurses maintain their mental health, it is necessary to focus sharply on the time segments requiring critical control and to conduct repeated studies to determine the critical values dividing eustress as well as to expand the population.
11.
This paper considers the allocation and engagement scheduling problem of interceptor missiles, which was formulated as an MIP (mixed integer programming) model in previous research. The objective of the model is the maximization of total intercept altitude instead of a more conventional objective such as the minimization of surviving target value. The concept of the time window is used to model the engagement situation, and continuous time is assumed for the flying times of both missiles. The MIP formulation is very complex due to the complexity of the real problem itself, so finding an efficient optimal solution procedure seems difficult. In this paper, an efficient genetic algorithm is developed by improving a general genetic algorithm, with the improvement achieved by carefully analyzing the structure of the formulation. Specifically, the new algorithm includes an enhanced repair process and a crossover operation that utilizes the idea of PSO (particle swarm optimization). The algorithm is then thoroughly tested on 50 randomly generated engagement scenarios, and its performance is compared with those of a commercial package and a more general genetic algorithm. The results indicate that the new algorithm consistently performs better than a general genetic algorithm. The new algorithm also generates much better results than the commercial package on several test cases when the execution time of the commercial package is limited to 8,000 seconds (about two hours and 13 minutes). Moreover, it obtains a solution within 0.13 to 33.34 seconds depending on the size of the scenario.
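A bare-bones GA skeleton with a repair hook follows, echoing the repair-then-evaluate structure described above; the real encoding, the PSO-inspired crossover, and the MIP objective are not reproduced, so every operator here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def repair(ind):
    """Placeholder repair: push an infeasible gene vector back into [0, 1]."""
    return np.clip(ind, 0.0, 1.0)

def ga(fitness, dim, pop=40, gens=200, pc=0.9, pm=0.05):
    P = rng.random((pop, dim))
    for _ in range(gens):
        fit = np.apply_along_axis(fitness, 1, P)
        parents = P[np.argsort(fit)[: pop // 2]]   # truncation selection
        kids = []
        while len(parents) + len(kids) < pop:
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(dim) if rng.random() < pc else np.zeros(dim)
            child = w * a + (1.0 - w) * b          # blend crossover
            child += (rng.random(dim) < pm) * rng.normal(0.0, 0.1, dim)
            kids.append(repair(child))             # repair before insertion
        P = np.vstack([parents, kids])
    fit = np.apply_along_axis(fitness, 1, P)
    return P[fit.argmin()], fit.min()

# Toy fitness standing in for the (negated) total intercept altitude.
print(ga(lambda z: float(((z - 0.3) ** 2).sum()), dim=6))
```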
12.
As supply chains are globalized, multinational companies are trying to optimize distribution networks using a hub-and-spoke structure. In this network structure, multinational companies locate regional distribution centers at hub airports, which serve demand in their corresponding regions. Especially when customers put a high priority on service lead time, the hinterlands of international hub airports become ideal candidate locations for regional hub distribution centers. By utilizing excellent airport and logistics services, regional distribution centers at hub airports can match supply with demand efficiently. In addition, regional hub distribution centers may increase the air cargo volume of each airport, which is helpful in the current extremely competitive airport industry. In this paper, we classified locational preferences into three primary categories, demand, service, and risk, and applied the analytic hierarchy process methodology to prioritize the factors of locational preference. Each primary factor comprises secondary factors. The demand factor contains access to current and prospective markets. The service factor comprises airport and logistics perspectives: in terms of airport operations, it includes airport service and connectivity; in terms of logistics operations, it contains infrastructure and logistics operations efficiency. The risk factor consists of country and business risks. We also evaluated the competitiveness of Asian hub airports as candidate locations for regional hub distribution centers; the candidates include Singapore, Hong Kong, Shanghai, Narita, and Incheon. Based on the analytic hierarchy process analysis, we derived strategic implications for hub airports seeking to attract multinational companies' regional hub distribution centers.
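As a small illustration of how the hierarchy above turns into global priorities, the following sketch multiplies assumed local weights down the tree; the factor names come from the abstract, but every numeric weight is invented:

```python
# Locational-preference hierarchy from the abstract, with illustrative
# (assumed) local weights that sum to 1 at each level.
hierarchy = {
    "Demand":  {"current market access": 0.6, "prospective market access": 0.4},
    "Service": {"airport service": 0.3, "connectivity": 0.3,
                "logistics infrastructure": 0.2, "logistics efficiency": 0.2},
    "Risk":    {"country risk": 0.5, "business risk": 0.5},
}
primary = {"Demand": 0.4, "Service": 0.4, "Risk": 0.2}

# Global weight of a secondary factor = primary weight x local weight.
global_w = {f"{p}/{s}": primary[p] * w
            for p, subs in hierarchy.items() for s, w in subs.items()}
print(global_w)
```

Scoring each candidate airport against these global weights and summing would then rank the candidates, which is the final aggregation step of an AHP study.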
13.
Project management is a tool for smooth operation during the full cycle from design to normal operation, including schedule, document, and budget management, and document management is an important task for big projects such as the JRTR (Jordan Research and Training Reactor). To manage the numerous large documents for a research reactor, a project management system was adopted, a project procedure manual was prepared, and a document control system was established. The ANSIM (Advanced Nuclear Safety Information Management) system consists of a document management folder, document container folder, project management folder, organization management folder, and EPC (Engineering, Procurement and Construction) document folder. First, the system is a computerized version of the Inter-office Correspondence (IOC), Document Distribution for Agreement (DDA), design documents, and Project Manager Memorandum (PM Memo) work prepared for the research reactor design. Second, it reviews, distributes, and approves design documents within the system and approves those documents for registration and supply to the research reactor user. Third, it integrates the information of the organization using the ANSIM document system and its members, as well as users' rights regarding the system. Through these functions, the ANSIM system has contributed to the vitalization of united research. Not only did the ANSIM system realize design document input, data loading, and search functions and manage KAERI's long-accumulated experience and knowledge assets according to a management strategy, but in doing so it also contributed to the activation of research, and it will actively help in the construction of other nuclear facilities and in exports abroad.
14.
Use of ICT in government has the ability to improve service delivery to citizens, and yet many developing countries have lagged behind in the implementation of e-Government; many e-Government initiatives in developing countries have also failed to achieve their objectives. This paper therefore aims to identify critical failure or success factors in Kenya, using Heeks' Factor Model. A survey questionnaire was developed, and data were collected and analyzed from officials and interested citizens. The analysis highlighted seven specific success and failure factors and their constituent elements in Kenya. The overall Kenyan e-Government implementation score falls in the Zone of Improvement (3.52 out of 5.0), which means partial success or failure. The enablers of e-Government projects are good strategy formulation and internal and external drive, whereas the main failure factor is weak ICT infrastructure. The areas for improvement are project management, design, competencies, and funding. The data analysis highlights both strengths and weaknesses for each factor or variable. In particular, the Kenyan government excels at the drive for change from top to bottom among government officers as well as external stakeholders, while the government officers who use e-Government are satisfied with the availability of a vision, strategy, and plan for e-Government implementation. Both technologies and e-transaction laws were the worst of all the variables in e-Government implementation; these two areas should be improved with immediate corrective action. In-depth study reveals that government officers and citizens cannot fully use their laptops and mobile devices due to the lack of an ICT network and its operating technology, and of a legal system covering the transaction of business information. Finally, the study concludes with recommendations for policy makers to shape the future of e-Government systems in both developing and developed countries.
15.
To identify the cause of an error and maintain the health of a system, an administrator usually analyzes event log data, since it contains useful information for inferring the cause of the error. However, because today's systems are huge and complex, it is almost impossible for administrators to manually analyze event log files to identify the cause of an error. In particular, as OpenStack, which is widely used as a cloud management system, operates with various service modules linked across multiple servers, it is hard to access each node and analyze the event log messages of each service module when an error occurs. In this paper, we therefore propose a novel message-based log analysis method that enables the administrator to find the cause of an error quickly. Specifically, the proposed method 1) consolidates event log data generated at the system level and the application service level, 2) clusters the consolidated data based on messages, and 3) analyzes the interrelations among message groups in order to promptly identify the cause of a system error. This study has significance in the following three aspects. First, the root cause of an error can be identified by collecting event logs at both the system level and the application service level and analyzing the interrelations among them. Second, administrators do not need to classify messages for training, since unsupervised learning is applied to the event log messages. Third, using Dynamic Time Warping, an algorithm for measuring the similarity of dynamic patterns over time, increases the accuracy of analysis of patterns generated from a distributed system in which time synchronization is not exactly consistent.
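As a sketch of the similarity measure mentioned in the third point, here is the classic dynamic-programming DTW distance on two 1-D sequences; the event-count series are invented to mimic log-message rates from imperfectly synchronized nodes:

```python
import numpy as np

def dtw(a, b):
    """Classic DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two event-count series with the same shape but shifted in time.
s1 = [0, 1, 3, 6, 3, 1, 0, 0]
s2 = [0, 0, 1, 3, 6, 3, 1, 0]
print(dtw(s1, s2))   # stays small despite the time shift
```

Unlike a pointwise Euclidean distance, DTW aligns the two series before comparing them, which is why it tolerates the clock drift between distributed nodes.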
16.
Quality function deployment (QFD) is a widely adopted customer-oriented product development methodology that translates customer requirements (CRs) into technical attributes (TAs), and subsequently into parts characteristics, process plans, and manufacturing operations. A main activity in the QFD planning process is the determination of the target levels of the TAs of a product so as to achieve a high level of customer satisfaction, using the data and information included in the house of quality (HoQ). Gathering the information or data for an HoQ may involve various inputs in the form of linguistic data, which are inherently vague, or human perception, judgement, and evaluation of the information and data. This research focuses on how to deal with this kind of imprecision in QFD optimization. In this paper, we assume the more realistic situation in which the values of the TAs are discrete, meaning each TA has a few alternatives, and in which the customer satisfaction level acquired by each TA alternative and the related cost are determined from subjective or imprecise information and/or data. To handle such imprecise information and data, an approach using some basic definitions of fuzzy sets and the signed distance method for ranking fuzzy numbers is proposed. An example of a washing machine in a two-segment market is provided to illustrate the proposed approach; in this example, the optimal solution from the fuzzy model is compared with that from the crisp model, and the advantage of using the fuzzy model is drawn.
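As a sketch of the ranking step, the following uses the signed distance commonly defined for a triangular fuzzy number (a, b, c) as d = (a + 2b + c)/4; the fuzzy satisfaction scores are hypothetical, not from the paper's washing machine example:

```python
# Signed-distance ranking of triangular fuzzy numbers (a, b, c),
# using the common form d = (a + 2b + c) / 4.
def signed_distance(tfn):
    a, b, c = tfn
    return (a + 2 * b + c) / 4.0

# Hypothetical fuzzy satisfaction scores for two TA alternatives.
alt1 = (3.0, 5.0, 7.0)   # "about 5"
alt2 = (4.0, 5.5, 6.0)   # "about 5.5", tighter spread
best = max([alt1, alt2], key=signed_distance)
print(signed_distance(alt1), signed_distance(alt2), best)
```

Ranking by signed distance turns the fuzzy comparison into a crisp one, which is what lets the fuzzy QFD model be optimized like its crisp counterpart.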
17.
The quality of service of ERP systems is a key issue in improving ERP performance as an IT service for corporations. Currently, most research in this area is based on existing quality models such as SERVQUAL, which are said not to measure service quality properly. These models also do not consider system acceptance, which is thought to be closely related to the quality of service for this type of IT service. Therefore, a research model is required that can address the issues related to ERP service quality measurement. In this paper, new research models are proposed, and empirical data obtained from SMEs' ERP users are applied to the proposed models to analyze the causal relationships between quality factors and to identify the most suitable quality measures. The relationship between service quality and system acceptance was also analyzed using a combined model. The results show that factors such as productivity and convenience affect quality the most, and that the most appropriate measure of ERP service quality is satisfaction. For system acceptance, ease of use is the main factor, and service quality and system acceptance show a strong correlation. This means that, in order to enhance the quality of ERP services, the service provider needs to provide customers with productivity-enhancing, user-friendly IT services and active product support. This research can help improve the service quality and user acceptance of ERP services and lead to further research in this area.