The k-means algorithm is one of the most popular and widely used clustering methods because it is easy to implement and very efficient. However, it must be run with a fixed number of clusters, because it evaluates clustering solutions using only the intra-cluster distance. The silhouette is a useful and stable validity index for choosing a clustering solution together with the number of clusters, since it considers both intra- and inter-cluster distances for unsupervised data. However, this validity index carries a high computational burden because it computes a quality measure for every data object. The objective of this paper is to propose a fast and simple speed-up method that overcomes this limitation so that the silhouette can be used for effective large-scale data clustering. In the first step, the proposed method computes and stores the distance between every pair of data objects once. In the second step, this distance matrix is used to calculate the relative distance rate (Vj) of each data object j, and this rate is used to choose a suitable number of clusters with little additional computation. In the third step, the proposed efficient heuristic algorithm (group search optimization, GSO, in this paper) searches for the global optimum while saving computational capacity, using Vj probabilistically to generate good initial solutions for the data clustering. Experiments and analysis on the Ruspini, Iris, Wine, and Breast Cancer datasets from the UCI Machine Learning Repository validate that the proposed method saves significant computation time compared with the original silhouette. In particular, the advantage of the proposed method over the previous method grows with the size of the data.
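A minimal sketch of the first step's idea in Python, assuming scikit-learn: compute the pairwise distance matrix once and reuse it to score k-means solutions for several candidate cluster counts with the silhouette index. The data, the candidate range, and the use of scikit-learn's silhouette are illustrative, not the authors' implementation of Vj or GSO.

```python
# Sketch: reuse one precomputed distance matrix to score k-means solutions
# for several candidate k values with the silhouette index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances, silhouette_score

X = np.random.rand(500, 4)            # placeholder data
D = pairwise_distances(X)             # step 1: compute distances once

best_k, best_sil = None, -1.0
for k in range(2, 11):                # candidate numbers of clusters
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    sil = silhouette_score(D, labels, metric="precomputed")  # reuse D
    if sil > best_sil:
        best_k, best_sil = k, sil
print(best_k, best_sil)
```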
This study investigates whether the public awareness of, and satisfaction with, the weather forecast service provided by the Korea Meteorological Administration (KMA) can be analyzed through social media data, as a way to overcome the limitations of the questionnaire-based surveys used in previous research. Sentiment analysis and association rule mining were applied to Twitter data containing opinions about the weather forecast service. The sentiment analysis showed that the frequency of negative opinions was very high, about 75%, relative to positive opinions, owing to the nature of public services. A detailed analysis showed that a large portion of users were dissatisfied with the precipitation forecast, and that its two error types, ‘false alarm’ and ‘miss’, needed to be analyzed in more detail. Therefore, association rule mining was performed on the negative tweets for each of these error types. It was found that a considerable number of complaints occurred when preventive actions turned out to be useless because a forecast predicting rain was a ‘false alarm’. In addition, this study found that people’s dissatisfaction increased when they experienced inconvenience due to high winds and heavy rain in summer, or severe cold in winter, that the forecast missed. This study suggests that the analysis of social media data can provide detailed information about forecast users’ opinions in almost real time, which is impossible through surveys or interviews.
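A minimal sketch of the association-rule step in plain Python: count supports of keyword pairs across tweets and report rules that pass support and confidence thresholds. The tweet keyword sets and thresholds are illustrative placeholders, not the study's data or parameters.

```python
# Sketch: association rules (support/confidence) over keyword sets
# extracted from negative tweets.
from itertools import combinations
from collections import Counter

tweets = [{"rain", "false_alarm", "umbrella"},
          {"rain", "false_alarm"},
          {"wind", "miss"},
          {"rain", "umbrella"}]

min_support, min_conf = 0.4, 0.6
n = len(tweets)
item_cnt = Counter(i for t in tweets for i in t)
pair_cnt = Counter(p for t in tweets for p in combinations(sorted(t), 2))

for (a, b), c in pair_cnt.items():
    support = c / n
    if support >= min_support:
        for x, y in ((a, b), (b, a)):
            conf = c / item_cnt[x]       # confidence of rule x -> y
            if conf >= min_conf:
                print(f"{x} -> {y}: support={support:.2f}, conf={conf:.2f}")
```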
Postural instability can increase the likelihood of hazardous slip and fall accidents in workplaces. The present study intended to extend understanding of the effect of abnormal neck posture on postural control during quiet standing. The effect of body fatigue on postural control was also of primary concern. Twelve healthy undergraduate students volunteered to participate in the experiment. While subjects stood on a force platform with the neck neutral, flexed, extended, or rotated, their center of pressure (COP) was measured under two levels of body fatigue. For the fatigue condition, subjects exercised on a treadmill until they reached a predetermined level of body fatigue. From the position coordinates of the COP, the length of the postural sway path was assessed along both the medio-lateral (ML) and anterior-posterior (AP) axes. Results showed that, in the AP direction, neck extension or rotation significantly increased the sway length compared with the neutral neck, and neck extension led to a greater sway length than neck rotation; neck flexion did not differ from neutral. The sway length in the AP direction also became significantly larger as body fatigue accumulated after treadmill exercise. In the ML direction, compared with the neutral posture, neck extension, flexion, or rotation did not significantly affect the length of the postural sway path; however, the sway length appeared to increase marginally with the neck extended under the fatigued condition. This study demonstrates that abnormal neck posture may interfere with postural control during standing: the ability to maintain postural stability decreases significantly with the neck extended or rotated, and body fatigue further degrades postural stability.
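A minimal sketch, in Python, of the sway-path measure described above: the summed point-to-point excursion of the COP, computed separately along the ML and AP axes. The COP samples are synthetic placeholders.

```python
# Sketch: per-axis sway path length from COP samples.
import numpy as np

cop = np.random.randn(1000, 2) * 0.5   # columns: ML (x), AP (y), in cm

ml_path = np.abs(np.diff(cop[:, 0])).sum()   # ML sway path length
ap_path = np.abs(np.diff(cop[:, 1])).sum()   # AP sway path length
print(f"ML: {ml_path:.1f} cm, AP: {ap_path:.1f} cm")
```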
As system complexity increases and technology innovation progresses rapidly, leasing equipment has become an important issue in many engineering areas. In practice, many engineering fields lease equipment because leasing is more economical than owning it. In addition, since maintenance actions for the equipment are costly and require specialists, the lessor is responsible for maintenance in most lease contracts. Hence, the lessor should establish an optimal maintenance strategy that minimizes the maintenance cost. This paper proposes two periodic preventive maintenance (PM) policies for leased equipment. Under policy 1, PM actions are performed at a fixed periodic interval that remains the same until the end of the lease period. The other policy determines the periodic PM interval that minimizes the total maintenance cost over the lease period. In addition, this paper presents two decision-making models for choosing the PM strategy for leased equipment based on the lessor’s preference between the maintenance cost and the reliability at the end of the lease period. The structural properties of the proposed decision-making models are investigated, and algorithms are provided to search for the optimal maintenance policy that satisfies the lessor. A numerical example illustrates the proposed model. The results show that a maintenance policy minimizing the maintenance cost becomes a reasonable choice as the lease term becomes shorter. Moreover, PM actions are performed more frequently when the minimal repair cost is higher than the PM cost, resulting in a higher total maintenance cost.
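A minimal sketch of the cost trade-off behind such policies: periodic PM with minimal repair between PM actions over a lease period. The power-law (Weibull) failure intensity, the perfect-PM assumption, and the cost figures are illustrative assumptions, not the paper's exact model.

```python
# Sketch: total maintenance cost over a lease as a function of the number
# of equally spaced PM actions, with minimal repairs between them.
L, beta, eta = 60.0, 2.5, 20.0    # lease length (months), Weibull shape/scale
c_pm, c_mr = 100.0, 400.0         # PM cost, minimal repair cost

def total_cost(n_pm):
    """Cost with n_pm equally spaced PM actions (perfect PM assumed)."""
    T = L / (n_pm + 1)                         # interval between renewals
    failures = (n_pm + 1) * (T / eta) ** beta  # expected minimal repairs
    return n_pm * c_pm + failures * c_mr

costs = {n: total_cost(n) for n in range(0, 15)}
best = min(costs, key=costs.get)
print(best, round(costs[best], 1))
```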
Research in the servitization of manufacturing has focused more on cases than on concepts over the past decade, because governments have emphasized practical research through policies that support their industries. Manufacturers need differentiated servitization strategies to gain competitiveness in the global market. They should examine how common servitization has become in their industries and markets, and also understand how it works and for what purpose it is done, since this is necessary to make the best decisions about distributing limited resources most effectively against global competitors. South Korea has the sixth largest trade volume in the world, but Korean SMEs’ marketing capabilities fall short compared with those of major global companies. This paper seeks to develop a suitable servitization model and its application, based on global cases recommended for Korean SMEs. SMEs need to check the urgency of servitization according to their products, industries, and target markets. In addition, factors such as the purpose, timing, and type of servitization are examined to see how they are related to each other. The most significant implication of this study is that the servitization processes for early-stage companies are modeled to help them make the best choices.
The Korean Ministry of Education (MOE) required cash flow statements, instead of fund statements, as a component of the financial statements of Industry-Academic Cooperation Foundations (IACFs) when it revised the IACF accounting rules in 2012.
While the fund statement’s accounts were aligned with the account level of the operating statement, the accounts of the newly adopted cash flow statement were set two levels higher than those of the operating statement. This may result from consideration of the IACFs’ burden in preparing cash flow statements, especially under the direct method. To help IACFs prepare cash flow statements, the MOE’s guidance on the IACF accounting rules introduced a preparation method and presented a practical example, but it did not check whether the accounts of the cash flow statement reconciled to the accounts of the corresponding balance sheet and operating statement. That is, the guidance still lacks a way to assure that the statement was accurately prepared.
Our study proposes a cash flow statement preparation method based on reconciliation, which ties the amounts of the cash flow statement accounts to the corresponding balance sheet and operating statement accounts with integrity checking. It also provides a practical example, using the same case as the MOE’s guidance on the IACF accounting rules, to help IACF accounting personnel prepare cash flow statements efficiently.
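A minimal sketch of the integrity check behind the reconciliation method: the net cash flow across the three activities must equal the change in the balance-sheet cash account. The figures are illustrative, not those of the MOE guidance example.

```python
# Sketch: integrity check tying the cash flow statement to the balance sheet.
operating_cf = 1_200    # cash flow from operating activities
investing_cf = -500     # cash flow from investing activities
financing_cf = -300     # cash flow from financing activities

cash_beginning = 2_000  # balance sheet, start of period
cash_ending = 2_400     # balance sheet, end of period

net_cf = operating_cf + investing_cf + financing_cf
assert net_cf == cash_ending - cash_beginning, "statements do not reconcile"
print("reconciled: net cash flow =", net_cf)
```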
In recent years, imbalanced data has become one of the most important and frequent issues in quality control in industrial fields. For example, defect rates have been drastically reduced thanks to highly developed technology and quality management, so only a few defective samples can be obtained from a production process. Therefore, quality classification must be performed when one class (the defective dataset) is much smaller than the other (the good dataset). However, traditional multi-class classification methods are not appropriate for such imbalanced datasets, since they classify data based on differences between one class and the others, which can hardly be found in imbalanced datasets. Thus, one-class classification, which thoroughly learns the patterns of the target class, is more suitable because it focuses only on data in the target class. So far, several one-class classification methods, such as the one-class support vector machine, neural networks, and decision trees, have been suggested. The one-class support vector machine and neural networks can guarantee good classification rates, and decision trees can provide sets of rules that can be clearly interpreted. However, the classifiers obtained from the former two methods consist of complex mathematical functions and cannot easily be understood by users, and for decision trees the criterion for rule generation is ambiguous. Therefore, as an alternative, a one-class classifier using hyper-rectangles was proposed, which performs precise classification compared with other methods and also generates rules that users can clearly understand. In this paper, we suggest an approach that improves upon the limitations of these previous one-class classification algorithms. Specifically, the suggested approach produces an improved one-class classifier using hyper-rectangles generated with a Gaussian function. The performance of the suggested algorithm is verified by numerical experiments on several datasets from the UCI Machine Learning Repository.
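A minimal sketch of a hyper-rectangle one-class classifier in Python: fit an axis-aligned box around the target class, with bounds set by a Gaussian-based margin. The margin rule below is an illustrative stand-in for the authors' Gaussian-function generation procedure.

```python
# Sketch: one-class classification with a single Gaussian-margin
# hyper-rectangle fitted to the target (majority) class.
import numpy as np

def fit_box(X_target, z=1.64):
    mu, sigma = X_target.mean(axis=0), X_target.std(axis=0)
    lo, hi = mu - z * sigma, mu + z * sigma   # Gaussian-based bounds
    return lo, hi

def predict(X, lo, hi):
    """1 = inside the box (target class), 0 = outlier."""
    return np.all((X >= lo) & (X <= hi), axis=1).astype(int)

X_good = np.random.randn(200, 3)              # abundant target-class data
lo, hi = fit_box(X_good)
X_new = np.vstack([np.zeros((1, 3)), 5 * np.ones((1, 3))])
print(predict(X_new, lo, hi))                 # -> [1 0]
```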
The dynamic capabilities of sensing market signals, creating new opportunities, and reconfiguring resources and capabilities toward new opportunities in a rapidly changing economic environment determine an enterprise’s competitiveness, value creation, and survival. This study conceptualizes a two-stage performance measurement framework based on the causal model of resource (input), process, and performance (output). We developed a ‘process capability index’ that reflects the dynamic capabilities factors as a key intermediary linking resource inputs and performance outputs in enterprise performance measurement. The process capability index consists of four elements: manpower (human resource level), operational productivity, structure, and risk management. A DEA (data envelopment analysis) model was applied to the developed performance indicators to analyze the branch office performance of a telecom company. Process capability efficiency (stage 1) uses resource inputs to reach a certain level of process capability; performance result efficiency (stage 2) uses the process capabilities to generate sales revenue and subscribers. The two-stage DEA model derives intermediate output values that optimize the two individual stages simultaneously. Some branch offices of the telecom company have focused on process capability efficiency, while others have focused on performance result efficiency. A positioning map using the two-stage efficiency decomposition, together with benchmarking, can help identify the sources of inefficiency and visualize strategic directions for performance optimization. Applications of two-stage DEA combined with case studies in performance measurement have been scarce; in particular, this paper contributes a new performance measurement model grounded in organization theory, namely dynamic capabilities.
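For intuition, here is a minimal single-stage, input-oriented CCR DEA sketch in Python using scipy; the paper's model chains two such stages through an intermediate process-capability output. The DMU data are illustrative.

```python
# Sketch: input-oriented CCR DEA efficiency score (envelopment form) for
# each DMU: min theta s.t. X'lam <= theta*x_k, Y'lam >= y_k, lam >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0], [6.0, 3.0], [8.0, 5.0]])  # inputs, rows = DMUs
Y = np.array([[60.0], [70.0], [80.0]])               # outputs (e.g., sales)

def ccr_efficiency(k):
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    # inputs:  sum_j lam_j * x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[k].reshape(m, 1), X.T]
    # outputs: -sum_j lam_j * y_rj <= -y_rk
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]])
    return res.x[0]

print([round(ccr_efficiency(k), 3) for k in range(3)])
```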
It is a crucial activity for the Korean army to detect and recognize the locations of enemy artillery and its weapon types first, for effective operation of friendly artillery weapons in wartime. For these activities, one of the most critical artillery weapon systems is the anti-artillery radar (hereafter, radar) for immediate counter-fire operations against the target. In early wartime, these radars’ roles are very important for minimizing friendly force damage, because decision-makers must quickly recognize several enemy artillery positions and then take action right away. To date, the Republic of Korea Army’s tactical artillery operations have depended only on individual commanders’ intuition and capability. Therefore, we propose a radar allocation model based on integer programming that combines ArcGIS (geographic information system) analysis data with each radar’s performance parameters, which include the allowable ranges of altitude, azimuth (field of view, FOV), and distance for target detection, as well as weapon types (rockets, mortars, cannon ammunition, etc.). We demonstrate the effectiveness of the resulting allocations of the various types of available radar assets through several experimental scenarios. Finally, the proposed model ensures optimal detection coverage, enhances artillery radar operations, and supports quick decisions by commanders.
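A minimal sketch of the allocation idea as a coverage-maximization integer program in PuLP; the sites, targets, coverage pairs, and radar budget are placeholders for the ArcGIS-derived terrain and radar performance data.

```python
# Sketch: choose radar sites to maximize the number of covered targets.
import pulp

sites = ["s1", "s2", "s3"]
targets = ["t1", "t2", "t3", "t4"]
covers = {("s1", "t1"), ("s1", "t2"), ("s2", "t2"),
          ("s2", "t3"), ("s3", "t3"), ("s3", "t4")}
max_radars = 2

prob = pulp.LpProblem("radar_allocation", pulp.LpMaximize)
x = pulp.LpVariable.dicts("use", sites, cat="Binary")     # radar at site?
y = pulp.LpVariable.dicts("hit", targets, cat="Binary")   # target covered?

prob += pulp.lpSum(y[t] for t in targets)                 # maximize coverage
prob += pulp.lpSum(x[s] for s in sites) <= max_radars     # radar budget
for t in targets:  # a target counts only if some chosen site covers it
    prob += y[t] <= pulp.lpSum(x[s] for s in sites if (s, t) in covers)

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([s for s in sites if x[s].value() == 1])
```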
In recent years, reducing operation and maintenance costs through advanced maintenance technology has been attracting many companies’ attention. The heavy machinery industry in particular regards this as a crucial problem, since a failure of heavy machinery incurs high costs and long downtime. To improve the current maintenance process, the heavy machinery industry is trying to develop methodologies that predict failures in advance and find their causes using usage data. Better analysis of failure causes requires more data, so various kinds of sensors are attached to machines and abundant product usage data are collected through sensor networks. However, systematic analysis of the collected product usage data is still in its infancy. Many previous works have treated failure occurrences as statistical data for reliability analysis; fewer works have applied product usage data to root cause analysis of product failures. The product usage data collected while failures occur should be considered in failure cause analysis. To this end, this study proposes a methodology for applying product usage data to failure cause analysis. The proposed methodology consists of several steps that transform product usage data into failure causes. Various statistical analyses of the product usage data, such as multinomial logistic regression and t-tests, are used for the root cause analysis. The methodology is applied to field data from operating locomotives, and the analysis results show its effectiveness.
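A minimal sketch of the kinds of statistics named above, assuming scikit-learn and scipy: a multinomial logistic regression relating usage features to failure modes, and a t-test comparing a usage variable between failed and healthy units. The data are synthetic placeholders.

```python
# Sketch: usage-data statistics for failure cause analysis.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
usage = rng.normal(size=(300, 4))            # e.g., load, speed, temp, hours
failure_mode = rng.integers(0, 3, size=300)  # 3 hypothetical failure causes

clf = LogisticRegression(multi_class="multinomial", max_iter=1000)
clf.fit(usage, failure_mode)
print(clf.score(usage, failure_mode))        # in-sample fit, for illustration

failed, healthy = usage[:150, 0], usage[150:, 0]
t, p = stats.ttest_ind(failed, healthy)      # does mean usage differ?
print(f"t={t:.2f}, p={p:.3f}")
```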
Using network betweenness centrality, we analyze the characteristics of Seoul metropolitan subway lines. Betweenness centrality highlights the importance of a node as a transfer point between pairs of nodes; this ‘transfer’ characteristic is obviously of paramount importance in transit systems. Both the traditional betweenness centrality measure and a weighted betweenness centrality measure, which uses the monthly passenger flow between two stations, are used. By comparing the traditional and weighted betweenness centrality measures of the lines, characteristics of the passenger flow can be identified. We also investigated the factors that affect betweenness centrality: it is the number of passengers boarding or alighting that significantly affects the measures. Through correlation analysis between passenger numbers and betweenness centrality, it is found that the Seoul metropolitan subway system is well designed in terms of the regional distribution of population. Four measures representing passenger flow characteristics are proposed; they do not follow a power-law distribution, which means that passenger flow is relatively evenly distributed among stations. In contrast, the passenger flows of subway networks in foreign cities such as Beijing, Boston, and San Francisco have been shown to follow power-law distributions, that is, heavily skewed passenger flow. In this study we also try to answer why the passenger traffic flow of the Seoul metropolitan subway network is more homogeneous than that of Beijing.
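A minimal sketch contrasting the two centrality measures with networkx on a toy graph; the edge weights standing in for passenger-flow-based impedance are illustrative.

```python
# Sketch: unweighted vs. weighted betweenness centrality on a small graph.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1.0), ("B", "C", 0.2), ("C", "D", 1.0),
    ("B", "D", 2.0), ("D", "E", 1.0),
])  # weight = travel impedance (e.g., inverse passenger flow)

plain = nx.betweenness_centrality(G)                    # topology only
weighted = nx.betweenness_centrality(G, weight="weight")
for node in G:
    print(node, round(plain[node], 2), round(weighted[node], 2))
```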
In this study, we consider the assembly line balancing (ALB) problem, a very important decision problem dealing with the optimal design of assembly lines. We consider ALB problems with soft constraints, which are expected to be fulfilled but do not always have to be satisfied and are difficult to express in exact quantitative form. Most previous research has dealt with hard constraints, which must be satisfied at all times in ALB problems. In this study, we modify the mixed integer programming model introduced in the existing study where the problem was first considered. Based on the modified model, we propose a new algorithm using a genetic algorithm (GA). The algorithm incorporates new features such as a mixed initial population, composed of randomly generated solutions and elite solutions of the simple ALB problem, and a fitness evaluation method based on the achievement ratio. In addition, we select genetic operators and parameters appropriate for the soft assignment constraints through preliminary tests. Computational experiments show that the proposed algorithm generates solutions with high achievement ratios of the soft constraints.
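A minimal GA skeleton in Python with an achievement-ratio fitness, i.e., the fraction of soft constraints a task-to-station assignment satisfies. The instance, the soft constraints, and the operator settings are illustrative assumptions, not the paper's.

```python
# Sketch: GA over task-to-station assignments, fitness = achievement ratio.
import random

N_TASKS, N_STATIONS = 8, 3
soft = [lambda a: a[0] == a[1],            # tasks 0, 1 prefer the same station
        lambda a: a[2] != a[3],            # tasks 2, 3 prefer different ones
        lambda a: a[7] == N_STATIONS - 1]  # task 7 prefers the last station

def fitness(assign):                       # achievement ratio in [0, 1]
    return sum(c(assign) for c in soft) / len(soft)

def random_solution():
    return [random.randrange(N_STATIONS) for _ in range(N_TASKS)]

pop = [random_solution() for _ in range(30)]
for _ in range(100):                       # generations
    pop.sort(key=fitness, reverse=True)
    elite, children = pop[:10], []
    while len(children) < 20:
        p, q = random.sample(elite, 2)
        cut = random.randrange(1, N_TASKS)
        child = p[:cut] + q[cut:]          # one-point crossover
        if random.random() < 0.2:          # mutation
            child[random.randrange(N_TASKS)] = random.randrange(N_STATIONS)
        children.append(child)
    pop = elite + children
best = max(pop, key=fitness)
print(fitness(best), best)
```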
There always exist nonzero inspection errors, whether inspectors are humans or automatic inspection machines. Inspection errors fall into two types, type I and type II, and each can be regarded as either a constant or a random variable. Under the assumption that the two types of random inspection errors are distributed “uniformly” on half-open intervals starting from zero, it was proved that inspectors overestimate any given fraction defective with probability greater than 50% if and only if the fraction defective is smaller than a critical value, which depends only on the ratio of the type II error to the type I error. It was also proved that the probability of overestimation approaches one hundred percent as the given fraction defective approaches zero. If these critical phenomena hold for any error distribution, they might have a great economic impact on commercial inspection plans, given the unfairness of overestimation and the recent trend of decreasing fraction defectives in industry. In this paper, we deal with the same overestimation problem but assume a “symmetric triangular” error distribution, expecting sharper results since the triangular distribution is closer to a normal distribution than the uniform distribution is. It turns out that the overestimation phenomenon still holds for the triangular error distribution.
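A minimal Monte Carlo sketch of the overestimation probability: the observed fraction defective is p_obs = p(1 − e2) + (1 − p)e1, where the type I error e1 and type II error e2 are drawn from symmetric triangular distributions on half-open intervals starting at zero. The values of p and the interval lengths are illustrative.

```python
# Sketch: estimate P(observed fraction defective > true fraction defective)
# under symmetric triangular error distributions.
import numpy as np

rng = np.random.default_rng(1)
p, a, b, n = 0.01, 0.04, 0.08, 1_000_000

e1 = rng.triangular(0.0, a / 2, a, size=n)   # type I: good judged defective
e2 = rng.triangular(0.0, b / 2, b, size=n)   # type II: defect judged good
p_obs = p * (1 - e2) + (1 - p) * e1

print("P(overestimate) ~", (p_obs > p).mean())
```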
In this study, the correlation between the execution of quality management activities and their results was verified by applying the Malcolm Baldrige model (hereafter, the MB model) as a quality management performance measurement indicator for small and medium enterprises (SMEs) in South Korea. To achieve this goal, we first needed to determine whether the categorical requirements of the MB model are recognized consistently in SMEs. To this end, factor analysis was conducted on the measurement variables in each category, which revealed that the process indicator comprises six factors and the outcome indicator five factors, as configured in the MB model. This result can be interpreted to mean that the requirements in each category of the MB model were well constructed and are recognized consistently throughout SMEs in South Korea. In addition, the analysis of causality between the process indicator (quality management activities) and the outcome indicator (management results) showed high causality between them. Although the quality management levels of SMEs in South Korea are inferior to those of conglomerates or other national quality award-winning companies, this study is significant in that the causality between quality management activities and results was verified with South Korean SMEs as the target of investigation. Thus, it is empirically shown that the MB model can contribute to improved management results for SMEs in Korea.
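A minimal sketch of the factor-analysis step, assuming scikit-learn: extract latent factors from survey measurement variables. The synthetic responses and the factor count are illustrative, not the study's survey items.

```python
# Sketch: factor extraction from survey measurement variables.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
responses = rng.normal(size=(200, 18))        # 200 firms x 18 survey items

fa = FactorAnalysis(n_components=6)           # process indicator: 6 factors
scores = fa.fit_transform(responses)
print(scores.shape, fa.components_.shape)     # (200, 6), (6, 18)
```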
Although the raw data of the 1997 National Wealth Statistical Survey (South Korea) are now two decades old, they remain meaningful as a benchmark for Korea’s capital stock estimates, which are produced by the perpetual inventory method (PIM). In the case of machinery and equipment, these data allow in-depth analysis of changes in Korea’s industrial structure. As for the average service life (ASL), an economic concept, changes in ASL over time are small, so the ASLs derived from the raw data of the 1997 National Wealth Statistical Survey are meaningful reference values for the ASLs estimated by the Bank of Korea and the National Statistical Office. Japan has changed its service lives in response to changes in its industrial structure, yet many of its assets are still used for the lives set in the Showa era (before 1989), and other countries show a similar trend. The United States, by contrast, continually reassesses useful ASLs and asset values, distinguishing between assets covered by the Hulten-Wykoff models and those that are not. Korea also benchmarked the useful ASLs of the United States and Japan when it conducted its own due-diligence surveys every ten years until 1997.
In this study, the ‘constrained’ Iowa curve estimation from the raw data of the 1997 National Wealth Statistical Survey is based on the age records of the assets and an appropriately derived maximum asset age. We then obtained a modified Iowa curve by smoothing, and from this modified curve we suggest an ASL for each asset class.
After 1997, vintage disposal data were collected directly by the National Statistical Office, together with Oh Hyun Seung and Cho Jin Hyung, in order to estimate useful ASLs. Since then, the B/S team of the Economic Statistics Bureau of the Bank of Korea has been working on a new concept of service life.
At the Bank of Korea, capital stock statistics were created by the perpetual inventory method (PIM) using fixed capital formation data. The asset classification comprised 2 categories of residential buildings, 4 of non-residential buildings, 14 of construction, 9 of transportation equipment, 28 of machinery, and 2 of intangible fixed assets. The Korean government accounting system has developed considerably, including valuation in the field of national accounts, but until 2008 it was based on single-entry bookkeeping. Many countries, including Korea, used single-entry rather than double-entry bookkeeping, under which entries can be aggregated by government accounting standard accounts; under single-entry bookkeeping there was no distinction in journaling between revenue and capital expenditure. For example, we would like to appropriately divide the past budget and settlement account data spent on dredging into capital and revenue expenditure, and then add the calculated capital expenditure to fixed capital formation (FCF), because revenue expenditure is a cost for maintenance and the like.
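A minimal sketch of the PIM recursion K_t = (1 − δ)K_{t−1} + I_t in Python; the investment series and the geometric depreciation rate are illustrative (the actual estimates use asset-specific service lives and retirement patterns).

```python
# Sketch: perpetual inventory method with geometric depreciation.
gfcf = [100.0, 110.0, 120.0, 90.0, 130.0]  # fixed capital formation by year
delta = 0.08                               # assumed depreciation rate

stock = 0.0
for year, inv in enumerate(gfcf):
    stock = stock * (1 - delta) + inv      # K_t = (1 - delta) * K_{t-1} + I_t
    print(year, round(stock, 1))
```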
This could be a new direction, especially for estimating the capital stock of infrastructure (social overhead capital, SOC) by the perpetual inventory method. It should also be noted that the differences arise not only from the capital-versus-revenue expenditure distinction but also from other factors. How much of the difference can be explained by the gap between the ‘new series’ and ‘old series’ methodologies? At the major asset classification level there is no large difference between the two series, but treating the remaining difference as mere rounding error would be problematic.
The proliferation of information technologies has made it possible to produce different versions of information products at much lower cost than traditional physical products. It is therefore common for information product manufacturers to consider a vertically differentiated product line to earn more profit through improved market coverage. Another salient characteristic of most information products is network externality. Existing research dealing with vertical differentiation and network externality has usually assumed an oligopolistic market in which the vertically differentiated products are provided by competing companies, and has analyzed the essentially dynamic character of network externality statically. In this study, unlike previous research, the vertical differentiation strategy of a monopolistic company under network externality is analyzed dynamically. We use a two-period model to accommodate the dynamic feature of network externality and, based on this model, analyze the profit-maximizing solutions. The results show that a monopolistic company has no incentive to differentiate products vertically when network externality is absent. On the contrary, when network externality exists, the monopolistic company can earn more profit by vertically differentiating the product line. It is also shown that, for more profit, the monopolistic company should keep the quality difference between the high-quality and low-quality products as large as possible.
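The benchmark no-differentiation result can be sketched with standard vertical-differentiation primitives; the uniform consumer types, linear utility, and zero marginal cost below are illustrative textbook assumptions, not necessarily the paper's exact two-period model.

```latex
% Benchmark one-period sketch (illustrative assumptions: types uniform on
% [0,1], utility theta*s - p, zero marginal cost; not the paper's model).
\documentclass{article}
\begin{document}
A consumer of type $\theta \sim U[0,1]$ buys quality $s$ at price $p$ iff
$\theta s - p \ge 0$, so demand is $1 - p/s$ and profit is
\[
  \pi(p) = p\Bigl(1 - \frac{p}{s}\Bigr), \qquad
  p^{*} = \frac{s}{2}, \qquad \pi^{*} = \frac{s}{4}.
\]
Profit depends on the top quality alone; adding a lower quality $s_{L} < s$
merely cannibalizes high-quality sales, so absent network externality the
monopolist has no incentive to differentiate.
\end{document}
```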
Each municipality is promoting signboard improvement projects, but their effectiveness is not great. The main reason is that, when signboards are manufactured, only the visual effect of drawing attention to the shop’s location is considered, and a uniform design is applied, so the signboard is not harmonized with its building.
Signboards are installed almost everywhere in the city, in residential, natural, and commercial environments. They provide customers with various services, such as traffic information and life safety. Moreover, by applying design and functional elements, signboards can enhance the competitiveness of companies and improve the urban environment by strengthening the connection between customers and space.
The purpose of this study is to investigate the effect of the design and functional elements of signboards on customer preference. To this end, we surveyed and analyzed 152 experts and non-experts (customers and self-employed shop owners).
As a result of the study, the following conclusions were drawn.
First, the functions and design elements of signboards are important factors that have a positive effect on customer preference. Second, the functions of a signboard have a positive (+) influence on customer preference; the information-providing function and the corporate-image function directly affect customer preference. Third, color elements have a positive (+) significant effect on customer preference; in particular, signs using similar (harmonious) colors affect customer preference positively. Fourth, the signboard design components have a positive (+) significant effect on customer preference; in particular, typefaces have a positive effect on customer preference.
In industrial fields, the process capability index has been used to evaluate the variation of quality in a process. Traditional process capability indices such as Cp, Cpk, Cpm, and C+pm have been applied in industry, mainly in univariate analyses. However, the mainstream of recent industry is the multivariate manufacturing process, in which multiple quality characteristics are correlated with each other. Therefore, multivariate statistical methods should be used in process capability analysis, and multivariate process indices need to be enhanced with more useful information and wider applicability in recent industrial fields. Hence, the purpose of this study is to develop a more effective multivariate process index (MCpI) using the multivariate inverted normal loss function (MINLF). The MINLF has the flexibility to represent any type of symmetric or asymmetric loss function, together with economic information. In particular, the proposed modeling method for the MINLF and the expected loss derived from it can be applied to any type of symmetric or asymmetric loss function, and the modeling method can easily be extended from the bivariate case to the multivariate case.
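A minimal sketch of an inverted-normal-form multivariate loss function and its expected loss under an assumed bivariate process distribution; the loss form L(y) = K·(1 − exp(−½(y−T)ᵀG⁻¹(y−T))), the shape matrix, and the process parameters are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: multivariate inverted normal loss and Monte Carlo expected loss.
import numpy as np

rng = np.random.default_rng(3)
T = np.array([10.0, 5.0])                      # target vector
G = np.array([[1.0, 0.3], [0.3, 0.5]])         # loss shape matrix
K = 100.0                                      # maximum loss

def minlf(y):
    d = y - T
    return K * (1.0 - np.exp(-0.5 * d @ np.linalg.inv(G) @ d))

# process: correlated quality characteristics, slightly off target
Y = rng.multivariate_normal([10.2, 5.1], [[0.4, 0.1], [0.1, 0.2]],
                            size=50_000)
expected_loss = np.mean([minlf(y) for y in Y])
print(round(expected_loss, 2))
```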