Additive Manufacturing (AM) is a process that fabricates products by joining materials, typically layer upon layer, according to a three-dimensional model. It has recently gained attention for its environmental advantages, including reduced energy consumption and high material utilization. However, controlling defects that can occur during metal additive manufacturing, such as incomplete melting and residual stress, remains a challenge, and trial-and-error verification of these defects is both time-consuming and costly. Consequently, efforts have been made to develop phenomenological models that capture the influence of process variables on defects and on the mechanical, electrical, and thermal properties of geometrically complex products. This paper introduces modeling techniques that can simulate the powder-based additive manufacturing process, focusing on representative metal additive manufacturing processes such as Powder Bed Fusion (PBF), Directed Energy Deposition (DED), and Binder Jetting (BJ). To calculate the thermal-stress history and the resulting deformations, modeling techniques based on the Finite Element Method (FEM) are generally utilized. For simulating the movement and packing behavior of powders during powder classification, modeling techniques based on the Discrete Element Method (DEM) are employed. Additionally, to simulate sintering and microstructural changes, techniques such as Monte Carlo (MC), Molecular Dynamics (MD), and Phase Field Modeling (PFM) are predominantly used.
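As a minimal illustration of the thermal-history calculations that such FEM-based models perform, the sketch below solves 1D transient heat conduction with an explicit finite-difference scheme under a crude moving heat source; all material values and source parameters are assumptions for illustration, not taken from any specific AM model.

```python
import numpy as np

# Minimal 1D explicit sketch of a thermal-history calculation of the
# kind FEM-based AM models perform. All values (steel-like properties,
# volumetric heating magnitude, scan speed) are illustrative assumptions.
k, rho, cp = 30.0, 7800.0, 500.0      # W/m-K, kg/m^3, J/kg-K
alpha = k / (rho * cp)                # thermal diffusivity, m^2/s
L, n = 1e-3, 101                      # 1 mm domain, grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha              # stable explicit time step
T = np.full(n, 300.0)                 # initial temperature, K

for step in range(2000):
    source = np.zeros(n)
    i = int((step * dt * 0.5) / dx) % n   # crude moving heat source
    source[i] = 1e12                      # volumetric heating, W/m^3 (assumed)
    T[1:-1] += dt * (alpha * (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
                     + source[1:-1] / (rho * cp))
    T[0], T[-1] = 300.0, 300.0            # fixed-temperature boundaries

print(f"peak temperature after run: {T.max():.0f} K")
```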
Decontamination is one of the important processes for dismantling nuclear power plants. Its purpose is to reduce the radiation levels of contaminated nuclear facilities, ensuring the safety of workers involved in decommissioning and minimizing the amount of radioactive waste. In this study, we investigate the reaction mechanisms and the thermodynamic energies of the HyBRID (Hydrazine-Based Reductive participated metal Ion Decontamination) process for decontamination of the primary coolant system of a nuclear power plant. We computed the thermodynamic properties of the HyBRID dissolution mechanisms, in which the corrosion metal oxides that accumulate in the primary coolant system along with radionuclides are dissolved by the HyBRID decontamination agents (H2SO4/N2H4/CuSO4). The HyBRID reaction mechanism has previously been studied using a commercial database (HSC Chemistry®), but Cu ions were used in place of Cu-hydrazine complexes in those reaction calculations because thermodynamic properties for the complexes were unavailable. To address this limitation, we supplemented the database with quantum-chemical calculations of the Cu-hydrazine complexes using density functional theory. The aim is to simulate more realistic reactions by explicitly including the Cu-hydrazine complexes, and to improve understanding of the HyBRID dissolution reactions by qualitatively and quantitatively comparing them with the reactions that neglect complex formation.
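As a concrete illustration of how such thermodynamic screening works, the sketch below assembles a reaction Gibbs energy from per-species formation values, in the spirit of database calculations supplemented by DFT-derived values; the chosen reaction and species are illustrative of reductive dissolution, not the full HyBRID mechanism.

```python
# Sketch of assembling a reaction Gibbs energy from per-species
# formation values (HSC-style database entries, optionally supplemented
# with DFT-derived values). Numbers are representative 298 K Gibbs
# energies of formation in kJ/mol; the reaction is illustrative.
G_f = {
    "Fe2O3(s)": -742.2,
    "N2H4(aq)":  128.1,
    "Fe2+(aq)":  -78.9,
    "N2(g)":       0.0,
    "H2O(l)":   -237.1,
    "H+(aq)":      0.0,
}

def reaction_gibbs(reactants, products):
    """Delta_G = sum(n * G_f, products) - sum(n * G_f, reactants);
    inputs are {species: stoichiometric coefficient} dictionaries."""
    return (sum(n * G_f[s] for s, n in products.items())
            - sum(n * G_f[s] for s, n in reactants.items()))

# Illustrative reductive dissolution: Fe2O3 + 0.5 N2H4 + 4 H+
#                                      -> 2 Fe2+ + 0.5 N2 + 3 H2O
dG = reaction_gibbs({"Fe2O3(s)": 1, "N2H4(aq)": 0.5, "H+(aq)": 4},
                    {"Fe2+(aq)": 2, "N2(g)": 0.5, "H2O(l)": 3})
print(f"Delta_G = {dG:.1f} kJ/mol (negative => thermodynamically favorable)")
```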
A sample size calculation algorithm was developed in a prototype version to select inspection samples in domestic bulk handling facilities. This algorithm determines the sample sizes of three verification methods satisfying a target detection probability for defected items corresponding to one significant quantity (8 kg of plutonium, 75 kg of uranium-235). In addition, instead of the approximation equation-based algorithm presented in the IAEA report, a sample size calculation algorithm based on the hypergeometric density function, capable of calculating an accurate non-detection probability, is adopted. The algorithm based on the exact equation evaluates the non-detection probability more accurately than the existing algorithm based on the approximation equation, but its computation time is considerably longer due to the heavier computational workload. Sample size must be determined within a few hours at laptop-level performance, because it is generally calculated on an inspector's portable laptop during inspection activities. Therefore, it is necessary to improve the calculation speed of the exact-equation algorithm, and in this study, algorithm optimization was conducted to improve computation time. To determine the optimal sample size, an initial sample size is calculated first, and an iterative process then adjusts the sample size to find the optimal result; most of the computation time is spent in this iterative optimization. First, the non-detection probability calculation over the sample sizes of the three verification methods was improved within the iterative optimization loop: the computation time of each step in the algorithm was reviewed in detail, and improvements were derived and applied to the areas with the largest effects. In addition, the number of iterations needed to find the optimal sample size was greatly reduced by applying a bisection-based algorithm. This method searches with a large interval at the beginning and halves the interval as the number of repetitions increases, so it requires fewer iterations than the existing algorithm, which steps in unit intervals. Finally, sample sizes were calculated for the 219 example cases presented in the IAEA report to compare computation times. The existing algorithm took about 15 hours, whereas the improved algorithm took only about 41 minutes on a high-performance workstation (about 22 times faster), and 87 minutes on a regular laptop. With the improvements made in this study, the sample size determination process, which previously relied on the approximation equation because of the complexity and speed of the exact calculation, is expected to be performed using the exact equation.
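To make the structure of the exact-equation approach concrete, here is a minimal single-method sketch with hypothetical population numbers: the non-detection probability is the hypergeometric probability of drawing no defects, and bisection replaces unit-step iteration in the search for the smallest adequate sample size. The actual algorithm jointly optimizes three verification methods, which this sketch does not attempt.

```python
from math import comb

def non_detection_prob(N, d, n):
    """Probability that a sample of size n from a population of N items
    containing d defects includes no defect (hypergeometric P(X = 0))."""
    if n > N - d:
        return 0.0
    return comb(N - d, n) / comb(N, n)

def min_sample_size(N, d, target_detection=0.95):
    """Smallest n with detection probability >= target, found by
    bisection on n instead of unit-step iteration. Valid because the
    detection probability is monotone increasing in n."""
    lo, hi = 0, N - d + 1          # at hi, detection probability is 1
    while lo < hi:
        mid = (lo + hi) // 2
        if 1.0 - non_detection_prob(N, d, mid) >= target_detection:
            hi = mid
        else:
            lo = mid + 1
    return lo

# Hypothetical example: 500 items, 20 of which would have to be
# falsified to divert one significant quantity.
print(min_sample_size(500, 20, 0.95))   # sample size meeting 95% detection
```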
Over the past decade, marine accidents caused by the loss of roll restoring ability of ships have continued to increase, so a device that can effectively reduce roll motion is needed. The anti-roll tank is a representative passive control device whose advantages are widely known, as it reduces roll motion simply by being installed. Therefore, this study develops a numerical analysis technique for a U-tube-type anti-roll tank. In particular, the free-surface height was measured experimentally to validate the analysis technique. The numerical technique was developed by comparing the effects of mesh dependency, turbulence model (k-ε, k-ω, and the Reynolds Stress Model), time-step size, and the number of inner iterations. The final analysis technique applied the Realizable k-ε turbulence model with a time-step size on the order of 10⁻² s and 15 inner iterations. The developed technique was validated using the free-surface heights measured under two U-tube anti-roll tank conditions. The numerical analysis in this study used STAR-CCM+ (ver. 17.02), a RANS-based commercial solver.
APro, developed by KAERI as a process-based total system performance assessment model, can simulate radionuclide transport affected by the thermal, hydraulic, mechanical, and geochemical changes that may occur in the engineered and natural barriers of a geological disposal system. APro targets a large-scale, heterogeneous 3D system that includes more than 10,000 boreholes located about 500 m underground and hundreds of fractures of different sizes distributed within an area of several km². Simulating transport and reaction phenomena for such a system through the global implicit approach (GIA) may require considerable computational resources or be intractable in some cases. Therefore, APro adopts the sequential non-iterative approach (SNIA), one of the operator splitting (OS) methods, to separate mass transport and reaction into independent problems. By using SNIA, the parallel computation performance of APro with multiple cores is expected to improve. In this study, the effect of SNIA on parallel computation performance was analyzed through a simple 1D reactive transport problem. Without SNIA, the finite difference equations, discretized from the partial differential equations (PDEs) describing the reactive transport problem, have to be solved all at once because all dependent variables are nonlinearly and spatially interconnected through the reaction and mass transport terms. When the reaction and mass transport terms are separated through SNIA, the mass transport problem can be converted into independent linear equations for each chemical species, and an efficient linear system solver can be applied to each. In particular, since the reaction problem reduces to independent nonlinear equations at each node, the parallel computation performance can be greatly improved. To verify this, the 1D reactive transport problem was implemented in MATLAB, and both SNIA and GIA were applied to solve it. There was no significant difference between the SNIA and GIA results for proper spatial and temporal discretization, which verified the accuracy of SNIA. To assess parallel computation performance, the calculation times for SNIA and GIA were measured and compared with an increasing number of cores. As the number of cores increased, the SNIA calculation became faster than GIA, which verified that SNIA can improve parallel computation performance in APro. In the future, the effect of SNIA on parallel computation performance will be verified for numerical analyses of large-scale geological disposal systems.
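The following toy 1D sketch illustrates the SNIA splitting described above, under an assumed first-order reaction A → B (not APro's chemistry): the transport step is linear and decoupled per species, and the reaction step reduces to an independent local problem at each node, which is what makes the scheme attractive for parallelization.

```python
import numpy as np

# Toy 1D SNIA sketch: per time step, (1) solve linear transport
# independently for each species, then (2) solve the reaction
# independently at each node. The first-order reaction A -> B with
# rate constant k is an assumed toy chemistry, not APro's.
nx, dx, dt = 100, 1.0, 0.1
D, k = 0.5, 0.3                        # diffusion coeff., rate constant
cA = np.zeros(nx); cA[0] = 1.0         # fixed-concentration inlet at left
cB = np.zeros(nx)

def transport(c):
    """Explicit diffusion step: linear and decoupled per species."""
    c_new = c.copy()
    c_new[1:-1] += dt * D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    return c_new

for _ in range(500):
    # Step 1 (transport): each species advances through its own
    # linear problem; an efficient linear solver applies per species.
    cA, cB = transport(cA), transport(cB)
    # Step 2 (reaction): node-local and independent, here with the
    # exact first-order solution; trivially parallel across nodes.
    reacted = cA * (1.0 - np.exp(-k * dt))
    cA -= reacted
    cB += reacted

print(f"A in domain: {cA.sum():.3f}, B in domain: {cB.sum():.3f}")
```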
Domestic as well as global automobile manufacturers are making greater efforts in cost reduction to strengthen their competitiveness. In this study, we developed a program to effectively manage the standard time of an automobile assembly line and validated it using data from the A-automobile factory. For this purpose, we developed a system that can manage standard time as well as calculate man-hours for the automobile assembly line.
In the present work, an explicit finite element analysis technique is introduced to analyze the thermal stress fields present in the additive manufacturing process. To this end, a finite element matrix formulation is derived from the equations of motion and continuity. The developed code, NET3D, is then applied to various sample problems involving thermal stress development. In these problems, heat applied to an inclusion from an external source establishes an initial temperature, from which heat flows into the surrounding body, and the thermal stress that develops due to the mismatch between the thermal strains is analyzed. As mass scaling can be used to shorten the computation time of explicit analysis, a mass scaling factor of 10⁸ is employed here, which yields results almost identical to the quasi-static results.
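A back-of-the-envelope sketch of why mass scaling shortens explicit runs: the stable time step of an explicit scheme is set by the element wave-transit time, which grows with the square root of mass, so scaling the mass by 10⁸ enlarges the stable step by 10⁴. The bar dimensions and material values below are assumptions for illustration.

```python
import numpy as np

# Mass scaling and the explicit stability limit: dt_crit ~ L / c with
# c = sqrt(E / (rho * scale)), so dt grows as sqrt(scale). Toy 1D bar
# element; steel-like material values are assumed.
E, rho, L = 200e9, 7800.0, 0.01          # Pa, kg/m^3, element length in m

def critical_dt(mass_scale):
    c = np.sqrt(E / (rho * mass_scale))  # wave speed drops as mass grows
    return L / c                          # CFL-type element time-step limit

for s in (1.0, 1e8):
    print(f"mass scale {s:g}: stable dt ~ {critical_dt(s):.3e} s")
# the ratio of the two time steps is sqrt(1e8) = 1e4
```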
This paper gives two graph-based algorithms for radioactive decay computation. The first algorithm identifies the connected components of the graph induced from the given radioactive decay dynamics to reduce the size of the problem; solutions are then derived independently over the precomputed connected components. The second algorithm utilizes the acyclic structure of radioactive decay dynamics. It evaluates the vertices of the induced system graph reachable from the initially activated vertices and finds the minimal set of starting vertices populating the entire reachable set. The decay calculations are then performed over the vertices reachable from each identified minimal starting vertex, with the initial values partitioned over the reachable vertices. Formal arguments are given to show that the proposed graph-inspired divide-and-conquer methods perform the intended radioactive decay calculation, and empirical results comparing the proposed algorithms are presented.
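The decomposition idea of the first algorithm can be sketched as follows, with an assumed toy decay network and decay constants: each weakly connected component induces a small linear system dN/dt = AN that can be solved independently, here via a matrix exponential.

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

# Sketch of the first algorithm's idea: split a decay network into
# connected components and solve each small linear ODE system
# dN/dt = A N independently. The 5-nuclide network is illustrative.
chains = {("A", "B"): 0.5, ("B", "C"): 0.1, ("X", "Y"): 0.2}  # parent->daughter: lambda
G = nx.DiGraph(); G.add_edges_from(chains)

N0 = {"A": 1.0, "B": 0.0, "C": 0.0, "X": 2.0, "Y": 0.0}       # initial amounts
t = 3.0
result = {}
for comp in nx.weakly_connected_components(G):
    nodes = sorted(comp)
    idx = {n: i for i, n in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for (p, d), lam in chains.items():
        if p in idx:                      # edge lies inside this component
            A[idx[p], idx[p]] -= lam      # parent loses activity
            A[idx[d], idx[p]] += lam      # daughter gains it
    n_t = expm(A * t) @ np.array([N0[n] for n in nodes])
    result.update(zip(nodes, n_t))        # each component solved on its own

print(result)
```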
PURPOSES: Thermal cracking (also called low-temperature cracking) is a serious distress for asphalt pavement, especially in eastern South Korea, the northern USA, and Canada. Thermal cracking occurs when the level of thermal stress exceeds the corresponding low-temperature strength of the given asphalt materials. Therefore, the computation of thermal stress is a key factor for understanding, quantifying, and evaluating the low-temperature cracking resistance of asphalt pavement. In this paper, two different approaches for computing the thermal stress of asphalt binder were introduced: Hopkins and Hamming's algorithm (1967) and the application of a simple power-law function. All computed results were compared visually; the findings and recommendations are then discussed.
METHODS: The thermal stress of the tested asphalt binder was computed based on the methodology introduced in the previous literature on viscoelastic theory. To perform the numerical analysis, MATLAB™ 2D matrix-correlation code and Microsoft Excel Visual Basic code were developed and used for the function-fitting and value-minimization processes, respectively. (A simplified numerical sketch of the convolution-based computation follows this abstract.)
RESULTS: Different thermal stress results were observed with the application of the different computation approaches. This variation in the data trends could be recognized not only visually but also statistically.
CONCLUSIONS: It can be concluded that these two computation approaches can successfully provide upper and lower limits (i.e., boundaries) for the thermal stress prediction of a given asphalt binder. Based on these findings, more reliable and reasonable thermal stress results can be provided, and better pavement performance predictions can also be expected.
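For orientation, here is a simplified sketch of the convolution-based thermal stress computation referred to above, using a generic power-law relaxation modulus and a constant cooling rate; all parameter values are illustrative, not fitted to the tested binder.

```python
import numpy as np

# Sketch of the conventional thermal stress computation: numerically
# evaluate sigma(t) = integral of E(t - t') * (d eps/dt') dt' for a
# constant cooling rate, with a simple power-law relaxation modulus
# E(t) = E1 * t^(-m). All parameter values here are illustrative.
E1, m = 2000.0, 0.35            # power-law parameters (assumed), MPa, -
alpha = 2e-4                    # thermal contraction coefficient, 1/degC
rate = 10.0 / 3600.0            # cooling rate: 10 degC per hour, in degC/s
deps_dt = alpha * rate          # constant thermal strain rate, 1/s

t = np.linspace(1.0, 24 * 3600.0, 5000)   # 24 h of cooling (avoid t = 0)
dt = t[1] - t[0]
sigma = np.array([np.sum(E1 * (ti - t[t < ti]) ** (-m) * deps_dt * dt)
                  for ti in t[1:]])        # rectangle-rule convolution

T = 20.0 - rate * t[1:]                    # temperature history from 20 degC
print(f"stress at T = {T[-1]:.1f} degC: {sigma[-1]:.3f} MPa")
```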
Low-temperature cracking of asphalt material is one of the serious distresses for asphalt pavement built in the northern U.S., Europe, and Canada. Thermal stress is a key factor for measuring (and estimating) the resistance of asphalt pavement against low-temperature cracking. For this reason, many road agencies have recognized thermal stress as a crucial parameter for evaluating the low-temperature performance of asphalt pavement materials. Thermal stress is conventionally computed in two steps. First, the relaxation modulus E(t) is generated through conversion of the experimental creep compliance data D(t); then the thermal stress σ(T) is numerically estimated by solving a convolution integral. In this paper, a one-step approach to the calculation of thermal stress, based on the Laplace transformation, is proposed. The thermal stress and corresponding critical cracking temperature obtained with the single- and two-step procedures on a set of three mixtures are compared graphically and statistically. It is observed that the application of the Laplace transformation provides thermal stress results that are reliable in comparison with the conventional computation approach.
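A sketch of the one-step idea under stated assumptions: for a power-law creep compliance D(t) = D₀ + D₁tⁿ, the relaxation modulus never needs to be computed in the time domain, since the interconversion is algebraic in the Laplace domain, Ẽ(s) = 1/(s²D̃(s)). The stress transform is formed directly and inverted numerically; the fit parameters below are placeholders, and mpmath's Talbot-contour inverter stands in for whatever inversion the paper actually uses.

```python
from mpmath import mp, gamma, invertlaplace

mp.dps = 30   # working precision for the numerical inversion

# One-step Laplace-domain sketch: instead of converting D(t) to E(t)
# and then convolving, work in the s-domain, where E~(s) = 1/(s^2 D~(s)).
# Power-law creep compliance D(t) = D0 + D1 * t^n (fit values assumed).
D0, D1, n = 5e-5, 2e-5, 0.4        # 1/MPa units, illustrative
eps_rate = 2e-4 * 10.0 / 3600.0    # constant thermal strain rate, 1/s

def D_hat(s):                      # Laplace transform of D(t)
    return D0 / s + D1 * gamma(n + 1) / s ** (n + 1)

def sigma_hat(s):                  # stress transform for constant strain rate
    E_hat = 1 / (s ** 2 * D_hat(s))
    return E_hat * eps_rate / s

# invert numerically (Talbot contour) at a few cooling times
for t in (3600.0, 6 * 3600.0, 24 * 3600.0):
    sigma = invertlaplace(sigma_hat, t, method='talbot')
    print(f"t = {t/3600:4.0f} h : sigma = {float(sigma):.3f} MPa")
```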
The theory of the Earth's free oscillations is revisited. First, we summarize the underlying formulations, such as the equation of motion and its conversion into a numerically integrable form, and then explain the computational procedures, including the treatment of the inner core-outer core boundary and the core-mantle boundary, information which has not been given explicitly in most publications. Second, we recalculated the periods of the Earth's free oscillation modes (period > 200 s) for the PREM model, and in doing so acquired values for modes missing in Dziewonski and Anderson (1981). As a case observation, one seismogram of the 2011 Tohoku earthquake recorded at Daejeon, Korea (KIGAM seismic station) was briefly analyzed to identify free oscillation mode excitations in its spectra. The material in this article should serve as a clear guide for those calculating the Earth's free oscillation modes.
Among the information technology and automation that have rapidly developed in the manufacturing industries recently, tens of thousands of quality variables are measured and stored in databases every day. Existing statistical methods, as well as variable selection and interpretation by experts, are limited in supporting proper judgment. Accordingly, various data mining methods, including decision tree analysis, have been developed in recent years. CART and C5.0 are representative algorithms for decision tree analysis, but they are limited in defining the tolerance of continuous explanatory variables, and their target variables are restricted to information that indicates only the quality of the products, such as the rate of defective products. Therefore, it is essential to develop an algorithm that improves upon CART and C5.0 and allows access to new quality information such as loss cost. In this study, a new algorithm was developed not only to find the major variables that minimize the target variable, loss cost, but also to overcome the limits of CART and C5.0. The new algorithm defines the tolerance of variables systematically by dividing each continuous explanatory variable into 3 categories. A larger-the-better characteristic was assumed, and the R programming environment was used to compare the performance of the new algorithm with the existing ones; 10 simulations were performed with 1,000 data sets for each variable. The performance of the new algorithm was verified through a mean test of loss cost. As a result of the verification, the new algorithm found tolerances of the continuous explanatory variables that lowered the loss cost more than the existing algorithms under the larger-the-better characteristic. In conclusion, the new algorithm can be used to find the tolerance of continuous explanatory variables that minimizes the loss in the process, taking into account the loss cost of the products.
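A minimal sketch of the core idea on synthetic data: split each continuous explanatory variable into three categories at its tertiles and compare the mean loss cost achievable in the best category. The quadratic larger-the-better loss and the data-generating process are assumptions for illustration, not the paper's experimental design.

```python
import numpy as np

# Sketch: 3-category tolerance search that minimizes loss cost. One of
# four synthetic continuous variables actually drives the quality
# characteristic; its best tertile should show the lowest mean loss.
rng = np.random.default_rng(0)
n = 1000
X = rng.uniform(0, 10, size=(n, 4))            # 4 continuous variables
target = 8.0                                    # larger-the-better target
y = X[:, 1] + rng.normal(0, 1, n)               # variable 1 drives quality
loss = np.maximum(target - y, 0) ** 2           # quadratic loss cost per item

def best_three_bin_split(x, loss):
    """Mean loss of the best of the 3 categories cut at the tertiles."""
    q1, q2 = np.quantile(x, [1/3, 2/3])
    bins = np.digitize(x, [q1, q2])             # category 0, 1, or 2
    return min(loss[bins == b].mean() for b in range(3)), (q1, q2)

for j in range(X.shape[1]):
    mean_loss, cuts = best_three_bin_split(X[:, j], loss)
    print(f"variable {j}: best-category mean loss = {mean_loss:8.3f}, "
          f"tolerance cuts = ({cuts[0]:.2f}, {cuts[1]:.2f})")
# variable 1 shows a clearly lower best-category loss: keeping it within
# its best category is the tolerance that minimizes the loss cost
```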
In this paper, to demonstrate the efficiency and accuracy of the dimensional reduction and recovery of high-aspect-ratio beams composed of heterogeneous materials, the results of VABS and of a three-dimensional finite element model were compared. The energy release rate was then calculated for the three-dimensional finite element model and the dimensionally reduced model using the virtual crack closure technique (VCCT). Using numerical models with initial cracks in circular and rectangular cross-sections, the efficiency and accuracy were demonstrated by comparing the recovery analysis results and the energy release rates obtained with the beam dimensional reduction and recovery method and the VCCT. In particular, the presented energy release rate computation technique is expected to be applicable to the static and dynamic modeling and life evaluation of high-altitude UAV wings, helicopter rotor blades, wind turbine blades, and tiltrotors.
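For reference, the VCCT estimate of the mode-I energy release rate reduces to a single expression built from crack-tip finite element output; the nodal values below are placeholders, not results from the models discussed above.

```python
# Minimal VCCT (virtual crack closure technique) sketch: the mode-I
# energy release rate from the crack-tip nodal force and the opening
# displacement one element behind the tip, G_I = F * du / (2 * b * da).
# The nodal values are placeholders standing in for FE results.
F_tip = 120.0     # nodal force at the crack tip, N (assumed FE output)
du = 4.0e-5       # relative opening displacement behind the tip, m
da = 1.0e-3       # crack-tip element length, m
b = 5.0e-3        # model thickness, m

G_I = F_tip * du / (2.0 * b * da)
print(f"G_I = {G_I:.1f} J/m^2")   # 120*4e-5 / (2*5e-3*1e-3) = 480 J/m^2
```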
Three-dimensional motion analysis systems and force plates are used for gait phase detection and for collecting ground reaction force (GRF) and ground reaction moment (GRM) data. However, force plates are restricted to laboratory environments and have limitations for abnormal walking with foot dragging. Therefore, methods were proposed to compute the GRM from the kinematic data of a three-dimensional motion analysis system. Gait phase detection from kinematic data was performed using the foot velocity algorithm (FVA), and its validity was confirmed by comparison with the results of the force plates. A gait model with 7 segments was composed to compute the GRM, and the results of the gait model driven by kinematic data were compared with those of the force plates: the vertical component agreed well, while the anterior-posterior and lateral components showed a similar tendency with some gaps.
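The force part of such a computation follows directly from Newton-Euler mechanics: during single support, the total GRF equals the sum over segments of mass times (acceleration minus gravity). The sketch below uses placeholder segment masses and center-of-mass accelerations, not the paper's 7-segment model parameters.

```python
import numpy as np

# Sketch of estimating the total ground reaction force from kinematics
# alone (Newton-Euler over all body segments during single support):
# GRF = sum_i m_i * (a_i - g). Masses and accelerations are placeholders.
g = np.array([0.0, 0.0, -9.81])                 # gravity, m/s^2

segments = {                                     # mass (kg), COM accel (m/s^2)
    "trunk":   (35.0, np.array([0.1, 0.0, 0.4])),
    "thigh_L": (7.0,  np.array([0.3, 0.1, -0.2])),
    "thigh_R": (7.0,  np.array([-0.2, 0.0, 0.1])),
    "shank_L": (3.5,  np.array([0.5, 0.0, -0.3])),
    "shank_R": (3.5,  np.array([-0.1, 0.0, 0.2])),
    "foot_L":  (1.0,  np.array([0.8, 0.1, -0.5])),
    "foot_R":  (1.0,  np.array([0.0, 0.0, 0.0])),
}

grf = sum(m * (a - g) for m, a in segments.values())
print(f"estimated GRF [N]: {np.round(grf, 1)}")
# The GRM about a point p follows similarly from sum_i m_i*(r_i - p)x(a_i - g)
# plus each segment's rate of change of angular momentum.
```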
The Korean automobile industry is in a difficult situation because of a more competitive global market and lower demand. Therefore, domestic as well as global automobile manufacturers are making greater efforts in cost reduction to strengthen their competitiveness. According to statistical data, the logistics cost of domestic manufacturers is higher than that of advanced countries. In this study, we developed a program to effectively manage the standard time of procurement logistics and validated it using data from the A-automobile factory. For this purpose, we developed a system that can manage standard time as well as calculate man-hours. The program is not just for calculating and managing standard man-hours; a scenario analysis function was added to calculate the benefit of introducing automated logistics equipment. We propose a scenario that uses an AGV instead of an electric motor vehicle to move components. In the scenario analysis, the job constitution is changed, and the system is then used to compare the results: the standard man-hours were confirmed to be reduced from 22.3 M/H to 14.3 M/H. In future research, it will be necessary to extend the scenario analysis function and to develop the algorithm with realistic constraint conditions.
The soybean aphid, Aphis glycines Matsumura, was introduced about a decade ago from Asia into North America, where it has become a serious pest of soybeans. This invasive pest has rapidly spread throughout the midwestern United States and southern Canada since 2000. We examined 689 individuals obtained from 23 collections in the USA, Korea, China, and Japan at eight microsatellite loci. To estimate the relative likelihood of alternative introduction scenarios for the soybean aphid, approximate Bayesian computation (ABC) was performed on the microsatellite data as implemented in DIYABC. The ABC analysis tested three scenarios (B1, B2, and B3), each hypothesizing an introduction from one of the Asian countries. Scenario B1, which assumes an introduction from Korea, obtained the highest posterior probability, ranging from 0.90 to 0.91 with 95% CIs of 0.81-0.99 and 0.88-0.94. The ABC analysis thus revealed that the scenario assuming an introduction of the soybean aphid into the USA from Korea was much more strongly supported than the other two scenarios (introduction from China or Japan).
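For readers unfamiliar with ABC, the scenario-comparison logic can be sketched with a generic rejection sampler (DIYABC itself uses richer summary statistics and a logistic-regression posterior estimate); the toy simulator and observed statistic below are invented for illustration.

```python
import numpy as np

# Generic rejection-ABC sketch of scenario comparison: simulate data
# under each scenario, keep simulations whose summary statistic falls
# close to the observed one, and read posterior scenario probabilities
# off the accepted set. The simulators are toy stand-ins.
rng = np.random.default_rng(1)
observed_stat = 0.62          # e.g., an observed heterozygosity (assumed)

def simulate(scenario):
    """Toy simulator: each introduction scenario implies a different
    expected summary statistic (placeholder means and spread)."""
    mean = {"B1_Korea": 0.60, "B2_China": 0.45, "B3_Japan": 0.75}[scenario]
    return rng.normal(mean, 0.08)

scenarios = ["B1_Korea", "B2_China", "B3_Japan"]
n_sims, eps = 100_000, 0.02
accepted = []
for _ in range(n_sims):
    s = scenarios[rng.integers(3)]            # uniform prior over scenarios
    if abs(simulate(s) - observed_stat) < eps:
        accepted.append(s)                    # keep scenarios that fit

for s in scenarios:                           # posterior probabilities
    print(f"P({s} | data) ~ {accepted.count(s) / len(accepted):.2f}")
```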