
Journal of Society of Korea Industrial and Systems Engineering (한국산업경영시스템학회지), KCI-indexed


Issue

Vol. 40 No. 2 (June 2017), 17 articles

1.
The competition to acquire customers with repurchase intention is becoming ever more intense in the saturated smartphone market. The aim of this study is therefore to investigate how purchasing determinants differ across age groups and between genders, and to analyze which purchasing determinants influence repurchase intention. To gather information, we conducted a survey of 252 respondents. We selected five purchasing determinants (A/S, H/W, switching cost, network effect, and design) through factor analysis and validity analysis. We then analyzed how respondents of different age groups and genders perceived the purchasing determinants, and analyzed the influence of the purchasing determinants on repurchase intention. Regarding differences across age groups, H/W, A/S, and network effect showed notable differences: older respondents tended to consider H/W, A/S, and network effect more important than younger respondents did. When comparing the results by gender, male respondents tended to regard the H/W factor as more important than female respondents did, and the opposite was true for the other factors (A/S, switching cost, design, and network effect), but these differences were not significant. Finally, among the determinant factors, network effect and design turned out to have a considerable influence on repurchase intention. It is therefore reasonable to say that if a company intends to boost repurchase intention, enhancing the network effect and developing an effective design are crucial. This study differs from existing research in that we analyzed respondents of different age groups and genders separately. Also, unlike other studies, we directly analyzed the influence of purchasing determinants on repurchase intention rather than on customer satisfaction. This study is expected to serve as basic information for establishing strategies at both the academic and practical levels, allowing companies to boost repurchase intention in the smartphone market.
2.
Airline schedules are highly dependent on various sources of uncertainty such as unfavorable weather conditions, mechanical problems, natural disasters, airport congestion, and strikes. If the schedules are not properly managed to cope with such disturbances, operational cost and performance are severely affected by delays, cancellations, and so forth. This situation is described as a disruption. When a disruption occurs, the airline requires a feasible recovery plan that returns operations to normal in a timely manner so as to minimize the cost and impact of the disruption. In this research, an Ant Colony Optimization (ACO) algorithm with a re-timing strategy is developed to solve the recovery problem for both aircraft and passengers. The problem consists of creating new aircraft routes and passenger itineraries to produce a feasible schedule during a recovery period. The suggested algorithm is based on an existing ACO algorithm that aims to reflect all downstream effects by considering the passenger recovery cost as part of the objective function value. This algorithm is complemented by a re-timing strategy that manages disrupted passengers more effectively by allowing delays even on some undisrupted flights. Delays of no more than 15 minutes are accepted, which does not affect the airlines' on-time performance. The suggested method is tested on the real data sets from the 2009 ROADEF Challenge, and the computational results are compared with existing results on the same data sets. The method generates a solution for most of the problem sets within 10 minutes, and the impact of the re-timing strategy on the results is discussed.
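The re-timing rule lends itself to a compact illustration. The sketch below is a minimal illustration with hypothetical `Flight` fields, not the paper's implementation: an undisrupted flight may be delayed to protect a re-accommodated passenger's connection only when the required delay is at most 15 minutes.

```python
from dataclasses import dataclass

MAX_RETIMING_MIN = 15  # larger delays are assumed to hurt on-time performance

@dataclass
class Flight:
    departure_min: int     # scheduled departure, minutes from midnight
    disrupted: bool

def retimed_departure(flight: Flight, passenger_ready_min: int) -> int | None:
    """Return a new departure time for an undisrupted flight if a small delay
    (<= 15 minutes) lets a re-accommodated passenger make the connection;
    otherwise return None and leave the flight unchanged."""
    if flight.disrupted:
        return None                      # disrupted flights are handled by the ACO routing itself
    needed_delay = passenger_ready_min - flight.departure_min
    if 0 < needed_delay <= MAX_RETIMING_MIN:
        return flight.departure_min + needed_delay
    return None
```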
3.
PL (Product Liability) refers to the legal responsibility of a manufacturer or seller for bodily injuries or property losses caused by product defects. It is therefore important for companies to construct a product liability response system that strategically manages and effectively adapts to product liability. A PL response system refers to the company-wide operation of PL prevention (PLP) measures, product safety (PS) measures, and PL defense (PLD) measures appropriate to a company's scale and environment. To establish an enterprise product liability response system, each essential component should be systematically operated and maintained considering the scale and characteristics of the corporation. The essential components of a PL response system are strategy, organization, training, technology, investment, and awareness. Regarding their roles, companies need specific strategies to secure product safety and protect customers from product defects, and appropriate organizations must be composed for the effective operation of such strategies. The objective of this paper is to examine the relationships among the essential components of the product liability response system and PL performance. PL performance consists of positive performance and negative performance. In particular, positive performance includes increased efforts in product or process innovation, such as strengthening research and development (R&D) to produce safer products without defects. To carry out this research, we obtained 98 questionnaires from manufacturing companies. A summary of the analyses is as follows: First, among the essential components, awareness and technology significantly affect positive performance. Second, among the essential components, awareness and strategy negatively affect negative performance.
4.
In this paper, we compared the efficiencies of national R&D investments across NT (nanotechnology) areas in terms of papers, patents, and commercializations, and sought ways to improve the efficiency of national R&D investment in each NT area. This is in response to the huge R&D investments the government has recently made in NT areas. We collected data on investments, papers, patents, and commercializations for R&D projects in NT areas through the National Science & Technology Information Service. Based on the data, we analyzed the investment and performance (papers, patents, and commercializations) for each NT area, calculated the efficiency of each NT area, and compared the efficiencies across NT areas. Next, using cluster analysis, we identified several NT areas with similar characteristics in terms of paper efficiency, patent efficiency, and commercialization efficiency. Finally, we derived implications for efficiency enhancement for each group. The cluster analysis showed that there are two groups: one is low in technological outcome (paper and patent) efficiency and high in commercialization efficiency, while the other is high in technological outcome efficiency and low in commercialization efficiency. Therefore, the strategy for the former group calls for support for technology transfer or technology introduction from other R&D performers and for providing guidance on improving commercialization ability to other R&D performers, while the strategy for the latter group calls for R&D support for transferring technology to other R&D performers, activation of technology transfer, and support for the commercialization of R&D performers.
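As a rough illustration of the computation involved (not the authors' exact procedure; all area names and figures below are made up), the sketch forms paper, patent, and commercialization efficiencies as outputs per unit of investment and then groups NT areas with k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

# hypothetical per-area totals: [investment, papers, patents, commercializations]
areas = {
    "nano-materials": [120.0, 300, 90, 12],
    "nano-devices":   [ 80.0, 150, 60, 20],
    "nano-bio":       [ 60.0, 200, 40,  5],
}

names = list(areas)
data = np.array([areas[n] for n in names], dtype=float)
efficiency = data[:, 1:] / data[:, [0]]   # papers / patents / commercializations per unit investment

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(efficiency)
for name, eff, lab in zip(names, efficiency, labels):
    print(name, eff.round(3), "group", lab)
```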
5.
The purpose of this paper is to improve the service tree analysis recently introduced by Geum et al. [15]. Service tree analysis structures a service from the customer-participation perspective and provides a qualitative method that categorizes service elements on the basis of their impact on the top service. This paper attempts to apply the concept of reliability importance to service tree analysis as a quantitative perspective, which received little consideration in Geum et al. [15]. Reliability importance is a measure, in the reliability engineering field, of the structural impact of the components that make up a system on the system lifetime, and it is often used in fault tree analysis. We transform reliability importance into service importance in accordance with service tree analysis, so that the influence of service elements on the service can be judged and compared. Service importance is defined as the amount of change in the service resulting from a change in a service element; it can therefore be utilized as an index for selecting a service element for service improvement. In addition, as an index for paired service elements, the relationship between two service elements can be measured by the joint service importance. This paper introduces the conceptual changes involved in applying reliability importance to service analysis, and shows through an example how to use service importance to identify priority service elements for the final service and to improve customer satisfaction. By using service importance and joint service importance in service tree analysis, efficient decision making is possible when determining the service elements on which to focus for analyzing and improving a service.
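In reliability terms, the definition quoted above corresponds to the Birnbaum (marginal) importance, and the paired measure to the joint importance. A plausible rendering for a service tree whose top service is described by a function $h(\mathbf{p})$ of element performances $p_1,\dots,p_n$ (the paper's exact notation may differ) is:

$$ I(i) = \frac{\partial h(\mathbf{p})}{\partial p_i} = h(\mathbf{p} \mid p_i = 1) - h(\mathbf{p} \mid p_i = 0), \qquad
JI(i,j) = \frac{\partial^2 h(\mathbf{p})}{\partial p_i \, \partial p_j}. $$

Elements with large $I(i)$ are the first candidates for improvement, and the sign of $JI(i,j)$ indicates whether improving one element makes the other more or less influential.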
6.
Due to the rapid change of the global business environment, the growth of China's steel industry, and the inflow of cheap products, the domestic steel industry faces a downward trend. A change of business paradigm from quantitative growth to qualitative products is needed in this industry. In this environment, it is very important for domestic steel distribution companies to secure competitiveness by selecting good supply companies through an efficient procurement strategy and effective methods. This study tried to identify the success factors of the steel distribution industry based on a survey of experts. The weight of each factor was found using AHP (analytic hierarchy process) analysis. The weights were then applied to a DEA (data envelopment analysis) model, and the best steel supply companies were eventually selected. This paper used 29 domestic steel distribution firms as a case example and suggested a five-step decision process for selecting good vendors. Quality, price, delivery, and finance were used as selection criteria, and from these four criteria nine variables were suggested, including product diversity, base price, discount, payment position, average delivery date, urgent-order responsiveness, and financial condition. These variables were used as output variables of the DEA, while sales and facilities were used as input variables. Pairwise comparisons were conducted using these variables, and the weights calculated by the AHP pairwise comparisons were used in the DEA analysis. Through the DEA efficiency analysis, efficient DMUs (decision making units) were recommended as steel supply companies. The domestic case example is used to show the effectiveness of this study.
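For illustration only, the sketch below derives AHP weights for the four criteria from a hypothetical pairwise comparison matrix using the principal-eigenvector method and checks the consistency ratio; in the study, such weights are then carried into the DEA model.

```python
import numpy as np

# hypothetical pairwise comparison matrix for (quality, price, delivery, finance)
A = np.array([
    [1,   3,   5,   4],
    [1/3, 1,   2,   2],
    [1/5, 1/2, 1,   1],
    [1/4, 1/2, 1,   1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # AHP priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
cr = ci / 0.90                                 # Saaty random index RI = 0.90 for n = 4
print("weights:", w.round(3), "CR:", round(cr, 3))   # CR < 0.1 is conventionally acceptable
```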
7.
A baggage handling system (BHS) in airport is an unified system for moving the passengers’ baggage in designated time. Input baggage from the check-in counter travels to the baggage claim area or cargo handling terminal through this system. In particular, entryway BHS consists of conveyors, X-ray and sorters such as tilt-tray to send the baggage to departing airplane and it could have various problems for handling certain amount of baggage in restricted time such as baggage jamming at certain merge point. This causes systemic error such as delay of the time, omissions of the baggage and even breakdown of the equipment and inefficiency. Also the increasing maximum time of the baggage passing through the BHS could delay the flight schedule and finally decrease the service level. Thus, the algorithm for controlling the flow of the merge is essential to the system. The window reservation algorithm is the one of the most frequently used methods to control the merge configuration. Above all, the reserve location, so called reserve ahead point, that allocates the window is important for the performance of the algorithm. We propose the modified window reservation algorithm and the best reserve locations by changing the reserve ahead point in the induction conveyors. The effect of various reserve ahead points such as the capacity and utility of the system were analyzed and the most effective reserve ahead point combination was found. The total baggage processing time and the utilization of the tilt-tray are properly considered when choosing the optimal Reserve ahead point combination. In the layout of this study, the configuration of four conveyors merged into one tilt-tray is studied and simulation analysis is done by AutoMod®, 3D simulation software. Through the simulation, the window reservation algorithm is effectively operated with the best combination of reserve ahead point which reduces the maximum baggage travel time.
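A highly simplified sketch of the reservation step (not the AutoMod model; all timing parameters are hypothetical): when a bag passes the reserve-ahead point of its induction conveyor, it reserves the earliest tilt-tray window it can still reach at the merge.

```python
def reserve_window(reserved: set[int], now: int, travel_time: int, window_len: int) -> int:
    """Reserve the earliest free tilt-tray window for a bag that has just
    passed the reserve-ahead point.  `travel_time` is the time from that
    point to the merge; windows are indexed by start time // window_len."""
    earliest = (now + travel_time) // window_len      # first window the bag can physically reach
    w = earliest
    while w in reserved:                              # skip windows already promised to other conveyors
        w += 1
    reserved.add(w)
    return w

# usage: bag passes the reserve-ahead point at t = 500 s, 40 s from the merge, 5 s windows
reserved_windows: set[int] = set()
print(reserve_window(reserved_windows, now=500, travel_time=40, window_len=5))
```

Moving the reserve-ahead point upstream or downstream changes `travel_time`, which is exactly the lever varied when comparing reserve-ahead point combinations.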
8.
To implement a smart home environment, we need an intelligent service platform that learns the user's lifestyle and behavioral patterns and recommends appropriate services to the user. The intelligent service platform should embed effective and efficient data mining algorithms for learning from the data gathered from the smart home environment. In this study, we evaluate the suitability of data mining algorithms for smart home intelligent service platforms. To do this, we first develop an intelligent service scenario for the smart home environment, which is used to derive functional and technical requirements for the data mining algorithms to be equipped in the smart home intelligent service platform. We then evaluate the suitability of several data mining algorithms by employing the analytic hierarchy process (AHP) technique. Applying this technique, we first score the importance of the functional and technical requirements through a hierarchical structure of pairwise comparisons made by experts, and then assess the suitability of the data mining algorithms for each functional and technical requirement. There are several studies on smart home services and platforms, but most have focused on a particular smart home service or the implementation of a particular service platform. In this study, we focus on the general requirements and the suitability of the data mining algorithms themselves that are to be equipped in a smart home intelligent service platform. As a result, we provide a general guideline for choosing appropriate data mining techniques when building a smart home intelligent service platform.
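The aggregation implied by this procedure can be written compactly. With $w_j$ the AHP weight of functional or technical requirement $j$ and $s_{aj}$ the experts' suitability score of algorithm $a$ for that requirement, a standard additive AHP synthesis (not necessarily the authors' exact formula) gives the overall suitability:

$$ S_a = \sum_{j} w_j \, s_{aj}, \qquad \sum_j w_j = 1 . $$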
9.
In this study, we conducted a comparative study of users' perceptions and behavior regarding public system services (PSS) using institutionalism theory and MGA (multi-group analysis) methodology. In particular, this study focuses on how institutional isomorphism applies to public system services and how MGA can be implemented correctly in a variance-based SEM (structural equation model) such as PLS (partial least squares). A data set of 496 effective responses was collected from public system users, and an empirical study was conducted using three segmented models categorized by public proximity theory (public firms = 113, government contractors = 210, private contractors = 173). For rigorous group comparisons, each model was estimated using the same indicators and approaches. PLS-SEM was used to test the research hypotheses, followed by parametric and non-parametric PLS-MGA procedures for testing categorical moderation effects. This study applied novel procedures for testing composite measurement invariance prior to the multi-group comparisons. The following main results and implications are drawn: 1) Partial measurement invariance was established; multi-group analysis can be done with the decomposed models although the data cannot be pooled into one integrated model. 2) Multi-group analysis using various approaches showed that proximity to the public sphere moderated some of the hypothesized paths from quality dimensions to user satisfaction, meaning that the categorical moderating effects were partially supported. 3) Careful attention should be given to the selection of statistical test methods and to the interpretation of the results of multi-group analysis, taking into account the different outcomes of the PLS-MGA test methods and the low statistical power of the moderating effect; it is necessary to use various methods, such as comparing the difference in the significance of the path coefficients and the significance of the path-coefficient difference between the groups. 4) Substantial differences in the perceptions and behaviors of PSS users existed according to proximity to the public sphere, including the significance of path coefficients, mediation, and categorical moderation effects. 5) The paper also provides detailed analysis and implications from a new institutional perspective. This study, using a novel and appropriate methodology for performing group comparisons, should be useful for researchers interested in comparative studies employing institutionalism theory and the PLS-SEM multi-group analysis technique.
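For reference, one widely used parametric PLS-MGA test (often attributed to Keil et al.) compares a path coefficient across two groups using pooled bootstrap standard errors, while the non-parametric procedures work directly on the bootstrap distributions. A common form of the pooled-variance statistic, shown here only to indicate the type of test involved, is:

$$ t = \frac{\beta^{(1)} - \beta^{(2)}}
{\sqrt{\dfrac{(n_1-1)^2 \, se_1^2 + (n_2-1)^2 \, se_2^2}{n_1+n_2-2}} \cdot \sqrt{\dfrac{1}{n_1}+\dfrac{1}{n_2}}},
\qquad df = n_1 + n_2 - 2 , $$

where $n_g$, $\beta^{(g)}$, and $se_g$ are the sample size, estimated path coefficient, and its bootstrap standard error in group $g$.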
10.
Data clustering determines groups of patterns in a dataset using a similarity measure and is one of the most important and difficult techniques in data mining. Clustering can be formally considered a particular kind of NP-hard grouping problem. The K-means algorithm, although popular and efficient, is sensitive to initialization and can become stuck in a local optimum because it is a hill-climbing clustering method. The method is also not computationally feasible in practice for very large datasets and large numbers of clusters. Therefore, we need a robust and efficient clustering algorithm that finds the global optimum (not a local optimum), especially now that large volumes of data are collected from many IoT (Internet of Things) devices. The objective of this paper is to propose a new Hybrid Simulated Annealing (HSA) method that combines simulated annealing with K-means for the non-hierarchical clustering of big data. Simulated annealing (SA) is useful for diversified search in a large search space, and K-means is useful for converged search in a predetermined search space. Our proposed method can balance intensification and diversification to find the globally optimal solution in big data clustering. The performance of HSA is validated on the Iris, Wine, Glass, and Vowel datasets from the UCI machine learning repository and compared with previous studies through experiments and analysis. Our proposed KSAK (K-means + SA + K-means) and SAK (SA + K-means) are better than KSA (K-means + SA), SA, and K-means in our simulations. Our method significantly improves the accuracy and efficiency of finding the globally optimal data clustering solution for complex, real-time, and costly data mining processes.
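A compact sketch of the KSAK idea (K-means, then simulated annealing over the centroid positions, then a final K-means refinement), assuming numeric data in a NumPy array; this illustrates the hybrid, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def sse(X, centers):
    # within-cluster sum of squared errors under nearest-center assignment
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

def ksak(X, k, iters=2000, T0=1.0, alpha=0.999, seed=0):
    rng = np.random.default_rng(seed)
    # step 1: K-means gives a good starting point (converged search)
    centers = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).cluster_centers_
    cost = sse(X, centers)
    best, best_cost, T = centers.copy(), cost, T0
    # step 2: simulated annealing perturbs the centroids (diversified search)
    for _ in range(iters):
        cand = centers + rng.normal(scale=0.05 * X.std(), size=centers.shape)
        cand_cost = sse(X, cand)
        if cand_cost < cost or rng.random() < np.exp((cost - cand_cost) / T):  # Metropolis rule
            centers, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = centers.copy(), cost
        T *= alpha
    # step 3: final K-means refinement started from the best centroids found by SA
    return KMeans(n_clusters=k, init=best, n_init=1).fit(X)
```

Dropping step 1 gives the SAK variant and dropping step 3 gives KSA, the other configurations compared in the abstract.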
11.
Productivity improvement is one of the important goals at which firms' technology development aims. Improved productivity from technology development means that a firm's inputs can produce more outputs, which makes productivity improvement from technology development increasingly important in today's age of technological advance and convergence. This research empirically analyzes the influence of external technology collaboration network diversity on the productivity improvement that small and medium-sized enterprises (SMEs) obtain from technology development, and the moderating effect of chief technology officer (CTO)-driven technology development on this influence. The study constructs a research model reflecting the moderating impact of CTO-driven technology development and tests it with ordinary least squares regression in IBM SPSS version 23, using data on 2,000 South Korean SMEs. The research empirically reveals two points. One is that SMEs' external technology collaboration network diversity has a positive influence on their productivity improvement from technology development. The other is that this positive effect is moderated by CTO-driven technology development. These two findings carry meaningful implications from both practical and academic points of view. The practical implication is that it is effective for SMEs to use CTOs to increase their productivity improvement from technology development. The academic implication is that technology collaboration with more diverse external partners can increase SMEs' productivity improvement from technology development.
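The hypothesized moderation corresponds to the usual interaction specification. With hypothetical variable names (NetDiv for collaboration network diversity, CTO for CTO-driven technology development, ProdImp for productivity improvement from technology development), the OLS model would look like:

$$ \text{ProdImp}_i = \beta_0 + \beta_1 \,\text{NetDiv}_i + \beta_2 \,\text{CTO}_i
 + \beta_3 \,(\text{NetDiv}_i \times \text{CTO}_i) + \boldsymbol{\gamma}' \mathbf{z}_i + \varepsilon_i , $$

where $\mathbf{z}_i$ collects control variables and a significant positive $\beta_3$ corresponds to the reported moderating effect.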
12.
When evaluating the effectiveness of an R&D program, there is a tendency simply to compare the performance of the beneficiaries before and after the program, or to compare the differences in performance between beneficiaries and non-beneficiaries before and after the program. However, these ways of evaluating effectiveness are problematic because they cannot differentiate between the complement effect, which facilitates corporate R&D investment, and the substitute effect, which, literally, substitutes for corporate R&D investment. These problems could therefore lead to wrong policies concerning R&D programs. In this paper, a new approach using path analysis is suggested as a means to overcome these problems, and it is applied to evaluate the effectiveness of the Plant Engineering Program conducted by the Ministry of Trade, Industry & Energy, Korea. First, the direct impact of government R&D investment on corporate R&D investment is analyzed, identifying which of the crowding-in effect (complement effect) and the crowding-out effect (substitute effect) is dominant. Next, the direct effect of government R&D investment on corporate performance and the direct effect of corporate R&D investment on corporate performance are analyzed, respectively. Finally, by combining the two previous analyses, the total effect of government R&D investment on corporate performance is identified. As a result, it turns out that, in the Plant Engineering Program, the crowding-in effect is more dominant than the crowding-out effect and that the program has a clearly positive effect, both direct and indirect, on the beneficiaries' corporate performance.
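The decomposition used here is the standard one in path analysis. Writing $a$ for the effect of government R&D investment on corporate R&D investment, $b$ for the effect of corporate R&D investment on corporate performance, and $c$ for the direct effect of government R&D investment on performance:

$$ \text{Total effect} = c + a \times b, \qquad a > 0 \;\Rightarrow\; \text{crowding-in (complement)}, \qquad a < 0 \;\Rightarrow\; \text{crowding-out (substitute)} . $$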
13.
This paper proposes an improved standard genetic algorithm (GA) for generating a near-optimal schedule for the integrated process planning and scheduling problem (IPPS), considering tool flexibility and tool-related constraints. Process planning involves the selection of operations and the allocation of resources, while scheduling determines the sequence in which operations are executed on each machine. Due to the high degree of complexity, a sequential approach has traditionally been preferred, in which process planning is determined first and scheduling is then performed independently based on the results. The two sub-problems, however, are intricately interrelated, so IPPS tends to solve them simultaneously. Although many studies on IPPS have been conducted, tool flexibility and capacity constraints are rarely considered. Various meta-heuristics, especially GAs, have been applied to IPPS, but their performance is not yet satisfactory. To improve solution quality relative to computation time in the GA, we adopted three methods. First, we used a random circular queue during the generation of the initial population, which provides sufficient diversity of individuals at the beginning of the GA. Second, we adopted inferior selection to choose the parents for the crossover and mutation operations, which helps maintain exploitation capability throughout the evolution process. Third, we employed a modification of the hybrid scheduling algorithm to decode the chromosome of an individual into a schedule, which can generate active and non-delay schedules. The experimental results show that our proposed algorithm is superior to the current best evolutionary algorithms on most benchmark problems.
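A bare-bones skeleton of the GA loop with the three modifications named above; every problem-specific routine (random circular queue initialization, inferior selection, crossover, mutation, the hybrid-scheduling decoder, and the makespan evaluator) is left as a caller-supplied stub, so this is only a structural sketch, not the paper's algorithm.

```python
def improved_ga(pop_size, generations, init_circular_queue, inferior_select,
                crossover, mutate, decode_hybrid_schedule, makespan):
    """Sketch of an improved GA for IPPS; all arguments are user-supplied
    callables standing in for the paper's problem-specific components."""
    def fitness(ind):
        return makespan(decode_hybrid_schedule(ind))

    population = [init_circular_queue() for _ in range(pop_size)]   # diverse initial population
    best = min(population, key=fitness)
    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = inferior_select(population), inferior_select(population)  # inferior selection
            offspring.append(mutate(crossover(p1, p2)))
        population = offspring
        gen_best = min(population, key=fitness)
        if fitness(gen_best) < fitness(best):
            best = gen_best
    return best, decode_hybrid_schedule(best)
```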
14.
This paper evaluates social enterprises' management efficiency with Data Envelopment Analysis (DEA). The data were based on the annual performance reports of 168 social enterprises published in 2015. The research focused on measuring both the financial efficiency and the social impact of the companies simultaneously. To apply DEA, the paper classified the enterprises into seven types, based on the types of social impact each company provides, before estimating efficiency. The results showed that group D, which employs disadvantaged people, provides social services, and shares resources, was the most efficient group and had the highest values of pure technical efficiency. In contrast, group B, which only employs disadvantaged people and provides social services, was the least efficient. The research suggests a practical and efficient framework for measuring social enterprises' management efficiency, covering both financial performance and social impact simultaneously using their self-published reports. Because the Korea Social Enterprise Promotion Agency does not release the business reports that social enterprises submit each year, there are basic limitations on researchers attempting analyses with data from all social enterprises in Korea. Thus, this study dealt with only the roughly 10% of social enterprises that self-published their performance reports on the Korea Social Enterprise Promotion Agency's web site. Despite these limitations, this study suggests substantial methods for estimating management efficiency with the self-published reports. Because self-publishing is increasing each year, such reports will be a main source of information for researchers examining and evaluating social enterprises' financial performance and social contribution. The results suggest not only a list of efficient enterprises but also methods of improvement for less efficient enterprises.
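For reference, the input-oriented CCR envelopment model that underlies this kind of DEA evaluation is shown below for the enterprise under evaluation $o$, with inputs $x_{io}$ and outputs $y_{ro}$; the pure technical efficiency reported in the abstract is usually obtained from the BCC variant, which adds the convexity constraint $\sum_j \lambda_j = 1$.

$$ \min_{\theta,\,\lambda} \ \theta \quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_{ij} \le \theta x_{io}\ \ \forall i, \qquad
\sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro}\ \ \forall r, \qquad \lambda_j \ge 0 . $$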
15.
With the information technology and automation that have rapidly developed in the manufacturing industries recently, tens of thousands of quality variables are estimated and categorized in databases every day. Existing statistical methods, or variable selection and interpretation by experts, place limits on proper judgment. Accordingly, various data mining methods, including decision tree analysis, have been developed in recent years. CART and C5.0 are representative algorithms for decision tree analysis, but they have limitations in defining the tolerance of continuous explanatory variables. Also, the target variables are restricted to information that indicates only the quality of the products, such as the rate of defective products. It is therefore essential to develop an algorithm that improves upon CART and C5.0 and allows access to new quality information such as loss cost. In this study, a new algorithm was developed not only to find the major variables that minimize the target variable, loss cost, but also to overcome the limits of CART and C5.0. The new algorithm defines the tolerance of variables systematically by splitting each continuous explanatory variable into three categories. A larger-the-better characteristic was assumed, in the R programming environment, to compare the performance of the new algorithm with the existing ones, and 10 simulations were performed with 1,000 data sets for each variable. The performance of the new algorithm was verified through a mean test of loss cost. The verification showed that the tolerances of continuous explanatory variables found by the new algorithm lowered the loss cost more than the existing algorithms did under the larger-the-better characteristic. In conclusion, the new algorithm can be used to find the tolerance of continuous explanatory variables that minimizes the loss in a process, taking the loss cost of the products into account.
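A toy illustration of the kind of search such an algorithm performs (the loss function, cut-point grid, and data below are all hypothetical): split a continuous explanatory variable into three categories with two cut points and keep the split whose best category has the lowest mean larger-the-better loss cost; that category's range can then be read as a candidate tolerance.

```python
import numpy as np

def taguchi_loss_larger_better(y, k=1.0):
    # larger-the-better quality loss: L = k / y^2
    return k / np.square(y)

def best_three_way_split(x, loss, n_grid=20):
    """Search two cut points that divide a continuous explanatory variable into
    three categories and return the split minimizing the lowest category-mean loss."""
    grid = np.quantile(x, np.linspace(0.05, 0.95, n_grid))
    best = None
    for i in range(n_grid - 1):
        for j in range(i + 1, n_grid):
            c1, c2 = grid[i], grid[j]
            cats = [loss[x <= c1], loss[(x > c1) & (x <= c2)], loss[x > c2]]
            if any(len(c) == 0 for c in cats):
                continue
            m = min(c.mean() for c in cats)
            if best is None or m < best[0]:
                best = (m, c1, c2)
    return best   # (minimum category-mean loss, lower cut, upper cut)

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 1000)
y = x + rng.normal(0, 0.5, 1000)          # hypothetical larger-the-better response
print(best_three_way_split(x, taguchi_loss_larger_better(y)))
```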
16.
The business process of a global third-party logistics company is defined as a network of logistics activities involving products that are manufactured in developing countries, such as Vietnam and China, and delivered to North or South American countries via intermediate stopover sites. The third-party logistics company usually uses a proprietary logistics information system to support the related logistics activities. However, each consignor may require a different business process based on the customer type or the characteristics of their products. The third-party logistics company therefore needs to modify its business process to reflect customers' requirements, resulting in modifications of the logistics information system and additional costs. A flexible mechanism is thus required to efficiently support the various types of requirements raised by the owners of the products. In this paper, first, we identified various business rules related to global third-party logistics activities. Second, we grouped the identified business rules into business processes, objects, relations, dependency, policy, representations, execution, and resources, and further into preconditions, postconditions, and invariants based on the checking point in time. Furthermore, the categorized rules are classified into inter-activity and intra-activity rules based on their execution range. Third, we proposed a rule syntax for describing the defined rules as scripts that both users and the information system can understand. When each activity is executed, the rule manager checks whether there are rules related to the execution of that activity. Finally, we developed a prototype rule management system to show the feasibility of our proposed methodology and to validate it with an example.
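Because the paper defines its own rule syntax, the fragment below is purely hypothetical; it only illustrates the idea of attaching precondition, postcondition, and invariant rules to a logistics activity and having a rule manager check them at execution time.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str
    scope: str                      # "intra-activity" or "inter-activity"
    kind: str                       # "precondition", "postcondition", or "invariant"
    check: Callable[[dict], bool]   # evaluated against the activity context

@dataclass
class RuleManager:
    rules: dict[str, list[Rule]] = field(default_factory=dict)

    def register(self, activity: str, rule: Rule) -> None:
        self.rules.setdefault(activity, []).append(rule)

    def violated(self, activity: str, kind: str, context: dict) -> list[str]:
        """Return the names of rules of the given kind that fail for this activity."""
        return [r.name for r in self.rules.get(activity, [])
                if r.kind == kind and not r.check(context)]

# hypothetical consignor-specific rule: X-ray screening must precede container loading
manager = RuleManager()
manager.register("load_container", Rule(
    name="xray_before_loading", scope="inter-activity", kind="precondition",
    check=lambda ctx: ctx.get("xray_done", False)))
print(manager.violated("load_container", "precondition", {"xray_done": False}))
```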
17.
One of the challenges facing precision manufacturers is the increasing feature complexity of tight-tolerance parts. All engineering drawings must account for the size, form, orientation, and location of all features to ensure manufacturability, measurability, and design intent. Geometric controls per ASME Y14.5 are typically applied to specify dimensional tolerances on engineering drawings and to define the size, form, orientation, and location of features. Many engineering drawings lack the geometric dimensioning and tolerancing necessary for timely and accurate inspection and verification. Plus-minus tolerancing is typically ambiguous and requires extra time from engineering, programming, machining, and inspection functions to debate and agree on a single conclusion. Complex geometry can result in long inspection and verification times and put even the most sophisticated measurement equipment and processes to the test. In addition, design, manufacturing, and quality engineers are often frustrated by communication errors over these features. However, an approach called profile tolerancing offers optimal definition of design intent by explicitly defining uniform boundaries around the physical geometry. It is an efficient and effective method for measurement and quality control. There are several advantages for product designers who use position and profile tolerancing instead of linear dimensioning. When design intent is conveyed unambiguously, manufacturers do not have to field multiple questions from suppliers as they design and build a process for manufacturing and inspection. Profile tolerancing, when applied correctly, provides manufacturing and inspection functions with unambiguously defined tolerances that are manufacturable and measurable. Customers can see cost and lead-time reductions with parts that consistently meet the design intent, and components can function properly, eliminating costly rework, redesign, and missed market opportunities. However, a supplier that is poised to embrace profile tolerancing will no doubt run into resistance from those who would prefer the way things have always been done. It is not just internal naysayers but also suppliers that might fight the change. In addition, the investment for suppliers can be steep in terms of training, equipment, and software.