To study soil arthropod abundance and beetle diversity by habitat, three types of samples (mixed litter and soil, litter only, and soil only) were collected and processed by sifting. From June to August 2023, a total of 75 samples (25 each of mixed, litter, and soil samples) were collected in Gangwon-do and Chungcheongbuk-do. Examination of Coleoptera diversity showed that, among the three sample types, the mixed samples exhibited the highest abundance, species richness, and Shannon index. This paper discusses the potential ecological applications of sifting. If research on the relationship between soil arthropod collection and sample type continues, it is expected to ultimately advance the study and understanding of soil ecosystems.
The containment building is the final barrier in the defense-in-depth system that prevents the release of radioactive material during a severe accident at a nuclear power plant. In a severe accident, core melting and hydrogen generation raise the internal pressure, and steam explosions can cause structural damage; to assess the structural integrity of the containment building against these effects, an ultimate pressure capacity evaluation is performed. Among ultimate pressure capacity evaluation methods, probabilistic evaluation relies on high-fidelity finite element models because of practical constraints; data sets reflecting the probability distributions of the uncertainty factors are constructed using sampling techniques, and nonlinear analyses are then performed. The nonlinear analysis results are used to derive fragility curves, and the probabilistic evaluation is carried out using these curves. If, for a given sampling technique, a data set of inappropriate sample size is constructed, the error in the fragility analysis due to statistical uncertainty increases. However, because of the enormous computational cost of finite element analysis, previous work has only been able to quantify and evaluate, to a limited extent, the selection of an appropriate sample size and the effect of an inappropriate sample size on the probabilistic performance evaluation. Therefore, in this study, a surrogate model for the finite element analysis is constructed using an artificial neural network trained on the material properties of the containment building and the displacement data under internal pressure. Based on this surrogate model, the effect of sample size on the probabilistic ultimate pressure capacity evaluation of the containment building is quantified and evaluated using the Monte Carlo method, Latin hypercube sampling, and the Sobol sequence, which are commonly used for sampling uncertainty distributions. This is expected to enable comprehensive exploration of the previously under-explored uncertainty space and of the associated statistical uncertainty.
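The three sampling schemes named above can be compared with a minimal sketch (hypothetical dimensions and sample size; `scipy.stats.qmc` is assumed to be available, and all points are drawn on the unit hypercube before being mapped to the actual uncertainty distributions):

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
n, d = 128, 2  # sample size and number of uncertain parameters (hypothetical)

# Plain Monte Carlo: independent uniform draws on [0, 1)^d
mc = rng.random((n, d))

# Latin hypercube sampling: one point per equal-probability stratum along each axis
lhs = qmc.LatinHypercube(d=d, seed=0).random(n)

# Sobol sequence: deterministic low-discrepancy points
# (power-of-two sample sizes are recommended for balance properties)
sobol = qmc.Sobol(d=d, seed=0).random(n)

for name, pts in [("MC", mc), ("LHS", lhs), ("Sobol", sobol)]:
    print(name, pts.shape, float(pts.min()), float(pts.max()))
```

In a fragility study, each row would be transformed through the inverse CDFs of the uncertainty factors and fed to the surrogate model; the comparison of interest is how quickly each scheme's fragility estimate stabilizes as `n` grows.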
In the post-COVID-19 era, the food industry is rapidly reshaping its market structure toward online distribution. Rapid delivery systems driven by large distribution platforms have ushered in an era of online distribution of fresh seafood, which was previously limited. This study surveyed 1,000 consumers nationwide to determine their online seafood purchasing behaviors. The research methodology used factor analysis of consumer lifestyles and Heckman's ordered probit sample-selection model. The main results of the analysis are as follows. First, quality, freshness, selling price, product reviews from other buyers, and convenience are particularly important considerations when consumers purchase seafood online. Second, online retailers and the government must prepare measures to expand seafood consumption by considering household characteristics and consumer lifestyles. Third, the analysis showed that consumers trust the quality and safety of seafood distributed through online platforms. However, purchase incentives cannot yet be provided to consumers who consider value consumption important, so improvement measures are needed. The results of this study are expected to provide online platforms, seafood companies, and producers with implications on consumer preferences, and can be used to establish future marketing strategies.
Scanning probe microscopy (SPM) has become an indispensable tool in efforts to develop the next generation of nanoelectronic devices, given its achievable nanometer spatial resolution and highly versatile ability to measure a variety of properties. Recently, a new scanning probe microscope was developed to overcome the tip degradation problem of classic SPM. The main advantage of this new method, called reverse tip sample (RTS) SPM, is that a single tip can be replaced by a chip containing hundreds to thousands of tips. Generally, for use in RTS SPM, pyramid-shaped diamond tips are made by molding on a silicon substrate. Combining RTS SPM with scanning spreading resistance microscopy (SSRM) using the diamond tip offers the potential to perform 3D profiling of semiconductor materials. However, damage frequently occurs to the completed tips because of the complex manufacturing process. In this work, we design, fabricate, and evaluate an RTS tip chip prototype to simplify the complex manufacturing process, prevent tip damage, and shorten manufacturing time.
In the case of the Pohang earthquake, which had a magnitude of 5.4 in 2017, geotechnical damage such as liquefaction and ground settlement occurred. The need for countermeasures has emerged, and experimental research in the Pohang area has continued. This study collected undisturbed samples from damaged fine-grained soil areas in Pohang where ground settlement occurred. Cyclic triaxial tests for identifying the dynamic characteristics of soils were performed on the undisturbed samples, and the results were analyzed to determine the cause of the ground settlement. As a result of the study, it was determined that, in the case of fine-grained soils, ground settlement occurred because the seismic load as an external force was relatively more significant than the shear resistance of the very soft fine-grained soils, rather than because of an increase in excess pore water pressure.
Typically, the bottom of the effluent treatment facility at a nuclear power plant contains sediment, which is low-contamination waste consisting of sludge, gravel, sand, and other materials from which radioactive contaminants have been removed. Among these sediments, sludge is an irregular solid consisting of small coagulated particles to which cobalt-bearing radioactive isotopes are attached. Currently, there is a record of disposing of dry active waste from domestic nuclear power plants, and efforts are underway to gather basic data for the disposal of untreated waste such as sludge, spent filters, and spent resin. In particular, the classification and disposal methods of the waste will be determined based on its radioactivity concentration. Therefore, plans are being made to determine the radioactivity concentrations of the radioactive isotopes and to establish disposal plans for sludge samples. In this study, pretreatment and dissolution were carried out for the analysis of radioactive isotopes in sludge samples from nuclear power plants. The deviation of the gamma radioisotope analysis results was derived to obtain an optimal sample quantity that represents the sludge.
As part of strengthening pyroprocessing safety measures, the Korea Atomic Energy Research Institute is developing LIBS (laser-induced breakdown spectroscopy) application technology to analyze molten salt components in the electrolytic recovery device in real time. LIBS performs qualitative and quantitative analysis by analyzing the spectrum of the energy emitted when elements on the surface of a salt sample are atomized and ionized by a highly focused laser. Since salt easily corrodes metal, it must be managed in an environment with a dew point of -40°C or lower. In this study, we designed and manufactured a device that places a rod-type sampling stick on a mounting base, automatically moves it to the optimal measurement position for LIBS, and retrieves the sample. Its characteristics are as follows. First, the LIBS unit is stationary and does not move. Second, the sample stick is placed on a mounting base and can rotate 360 degrees. Third, on command, the sample stick automatically moves to the optimal LIBS measurement position with three degrees of freedom (X, Y, Z). Fourth, the salt attached to the sampling stick is recovered for chemical analysis by driving the gripper mounted at the bottom of the Z axis together with the Z axis and the rotation axis (R). The X, Y, and Z travel distances of the device are each 100 mm, the rotation range is 360 degrees, the grip stroke is 50 mm, and the positioning accuracy is ±20 µm. Once the performance test of the automated salt sample analysis device is completed, it will be installed in a dry room with a dew point of -40°C or lower. Samples will be collected remotely in connection with the electrolytic recovery device and the gantry robot built in the dry room. We plan to conduct experiments on seating the sample stick and, ultimately, comprehensive experiments in conjunction with LIBS.
Recently, the status of North Korea's denuclearization has become an international issue, and there are also indications of potential nuclear proliferation among neighboring countries. Thus, the need to establish nuclear activity verification technology and strategy is growing. In terms of ensuring verification completeness, sample collection-based analysis is essential. The concepts of Chain of Custody (CoC) and Continuity of Knowledge (CoK) can be defined for the sample extraction process as follows: CoC is interpreted as the 'system for managing the flow of information concerning the examinee', and CoK as the 'inspector's continuity of information collection through CoC'. In sample collection in unreported areas for nuclear activity verification, there are additional risks such as worker exposure or kidnapping and sample theft or tampering. Therefore, additional devices might be required to maintain CoC and CoK in unreported areas. In this study, an Environmental Geometrical Data Transfer (EGDT) device was developed to ensure the safety of workers and the CoC/CoK of the samples during the collection process. The device was designed for both mobility and rechargeability. It operates in two modes based on its intended users: sample mode and worker mode. Through sensors positioned in the rear part of the device (radiation, gyroscope, light, temperature, humidity, and proximity sensors), various environmental information can easily be acquired in real time. GPS information can also be received, allowing responsiveness to various hazardous scenarios. Moreover, an OLED display on the front allows users to check device status such as battery level and Wi-Fi connectivity. Finally, an alarm function was integrated to enable rapid awareness during emergencies.
These functions can be updated and modified through Arduino-based firmware, and both the device and the information collected through it can be remotely controlled via custom software. Based on the presented design conditions, a prototype was developed and field assessments were conducted, yielding results within an acceptable margin of error for various scenarios. By applying the EGDT developed in this study to the sample collection process for nuclear activity verification, stable maintenance of CoC/CoK is expected through more accurate information transmission and reception.
To analyze the activity concentration of radionuclides in radioactive sludge samples generated as low- and intermediate-level radioactive waste from domestic nuclear power plants, a pretreatment process that dissolves and homogenizes the sample is essential. However, this pretreatment requires the use of hydrofluoric acid, which makes the analysis difficult and forces analysts to handle hazardous chemicals. Therefore, we aim to minimize the use of hydrofluoric acid by measuring gamma nuclides in the sludge sample without pretreatment and to compare the measurement results obtained with and without pretreatment according to the sample matrix. We will collect about 0.1 g of the sludge sample and dissolve it using an acid treatment process after microwave digestion. We will then use gamma spectroscopy to check the concentrations of the nuclides present in the sludge before and after dissolution and consider the effect of the sample matrix.
A prototype sample size calculation algorithm was developed to select inspection samples in domestic bulk handling facilities. The algorithm determines the sample sizes of three verification methods that satisfy a target detection probability for defective items corresponding to one significant quantity (8 kg of plutonium, 75 kg of uranium-235). In addition, instead of the approximation-equation-based algorithm presented in the IAEA report, a sample size calculation algorithm based on the hypergeometric probability mass function, capable of calculating an exact non-detection probability, was adopted. The exact-equation algorithm evaluates the non-detection probability more accurately than the existing approximation-based algorithm, but its computation time is considerably longer because of the large amount of computation involved. The sample size must be determined within a few hours on laptop-level hardware, because it is generally calculated on an inspector's portable laptop during inspection activities. Therefore, the calculation speed of the exact-equation algorithm must be improved. In this study, the algorithm was optimized to reduce computation time. To determine the optimal sample size, an initial sample size is calculated first, and an iterative process then adjusts the sample size to find the optimal result. Most of the computation time is spent in this iterative sample size optimization. First, the non-detection probability calculation for the sample sizes of the three verification methods was improved within the iterative optimization loop. The computation time of each step of the algorithm was reviewed in detail, and improvements were derived and applied to the areas with the largest effect.
In addition, the number of iterations needed to find the optimal sample size was greatly reduced by applying a bisection-based algorithm. This method searches with a large interval at the beginning and halves the interval as the number of repetitions increases, so it requires fewer iterations than the existing algorithm, which steps by a unit interval. Finally, sample sizes were calculated for the 219 example cases presented in the IAEA report to compare computation times. The existing algorithm took about 15 hours, but the improved algorithm took only about 41 minutes on a high-performance workstation (about 22 times faster). It also took 87 minutes to calculate the cases on a regular laptop. With the improvements from this study, the sample size determination process, which was previously performed with the approximate equation because of the complexity and speed issues of the exact calculation, is expected to be performable with the exact equation.
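The exact-equation attribute sampling and bisection search described above can be sketched as follows. This is a simplified illustration with hypothetical numbers and a single verification method rather than the study's three-method combination; `scipy.stats.hypergeom` supplies the exact hypergeometric probability.

```python
from scipy.stats import hypergeom

def non_detection_prob(N, D, n):
    """Exact probability that a random sample of size n drawn without
    replacement from N items contains none of the D defective items."""
    return hypergeom.pmf(0, N, D, n)

def sample_size_bisect(N, D, beta):
    """Smallest n whose non-detection probability is <= beta.

    non_detection_prob is monotonically decreasing in n, so a bisection
    over [0, N] converges in O(log N) probability evaluations instead of
    the O(N) evaluations of a unit-step search.
    """
    lo, hi = 0, N
    while lo < hi:
        mid = (lo + hi) // 2
        if non_detection_prob(N, D, mid) <= beta:
            hi = mid   # mid already satisfies the target; search lower
        else:
            lo = mid + 1
    return lo

# Hypothetical case: 200 items, 10 of which are defective,
# target non-detection probability of 5 %
n = sample_size_bisect(200, 10, 0.05)
print(n, non_detection_prob(200, 10, n))
```

The monotonicity of the non-detection probability in `n` is what makes the bisection valid; the same idea extends to the multi-method case by bisecting over one method's sample size while the others are held fixed.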
In liquid scintillation counting, sample radioactivity is analyzed by measuring photons emitted from counting vials. The quenching effect lowers the photon intensity from samples, which leads to lower counting efficiency, so an appropriate quenching correction according to the characteristics of the samples is important. In this study, quenching correction for H-3 analysis was conducted according to the characteristics of paper packaging material leached samples. The leached samples are produced by an H-3 leaching method under development for H-3 contamination screening. There are several methods of quenching correction, such as the internal standard (IS) method, the quench correction curve, and the triple-to-double coincidence ratio (TDCR) method. For a quench correction curve, a quenched standard set with the same matrix as the experimental samples must be prepared. Each leached sample, however, has a different matrix and color depending on the leaching conditions, so a standard set with the same matrix as the samples cannot be prepared. In this study, the counting samples themselves are used to plot the quench correction curve instead of a quenched standard set. The spectral quench parameter of the external standard [SQP(E)] is used as the quench indicating parameter (QIP). TDCR and the counting efficiencies determined by the IS method are used as counting efficiencies. The quench curve of TDCR versus SQP(E) has R² = 0.55, while the curve of IS-method efficiency versus SQP(E) has R² = 0.99. TDCR is known to approximate the counting efficiency; however, using TDCR as the counting efficiency requires caution in H-3 analysis of leached samples. The curve using the IS-method efficiency is suitable for H-3 analysis of leached samples. In this study, a quench correction curve was prepared for H-3 analysis of leached samples of paper packaging material.
SQP(E), TDCR, and the efficiency from the IS method were used as parameters to plot the quench correction curve, and the efficiency from the IS method proved suitable for H-3 analysis of the leached samples. The results of this study can be used for H-3 analysis of leached samples of paper packaging material.
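The curve-fitting step described above can be sketched in a few lines. The (SQP(E), efficiency) pairs below are purely hypothetical illustrations, not the study's measured values; in practice each pair comes from an IS-method efficiency determination on one of the leached counting samples.

```python
import numpy as np

# Hypothetical (SQP(E), efficiency) pairs for illustration only;
# real values come from IS-method measurements on the counting samples.
sqpe = np.array([780.0, 760.0, 740.0, 720.0, 700.0, 680.0])
eff = np.array([0.32, 0.29, 0.26, 0.22, 0.18, 0.14])

# Second-order quench correction curve: efficiency as a function of SQP(E)
coeffs = np.polyfit(sqpe, eff, deg=2)
fit = np.polyval(coeffs, sqpe)

# Coefficient of determination R^2 of the fitted curve
ss_res = np.sum((eff - fit) ** 2)
ss_tot = np.sum((eff - eff.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))

# Applying the curve: a measured SQP(E) yields a counting efficiency,
# and activity follows as net count rate divided by that efficiency.
measured_sqpe = 730.0
efficiency = np.polyval(coeffs, measured_sqpe)
```

An R² near 1, as obtained for the IS-method curve in the study, indicates that SQP(E) tracks the quench level of these samples well; the low R² of the TDCR curve is what motivates the caution noted above.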
The measurement activities used to evaluate the material balance of nuclear material are usually performed by the operator, because the IAEA does not have enough manpower to carry out nuclear measurement accountancy for all nuclear materials in the world. Therefore, the IAEA must consider scenarios in which a facility operator attempts to divert nuclear material for misuse by distorting measurement records, and the operator's measurement data must be verified as normal or not. For verification, the IAEA measures inventory items using its own equipment, independent of the facility operator's equipment. Since not all inventory items can be verified with limited resources, the number of items to be verified is determined through a statistical method called sample size calculation. The IAEA measures the selected items with its own equipment and compares the results with the operator's records. The sample size is determined by comprehensively considering the targeted amount of diverted nuclear material, the targeted non-detection probability, and the performance of the measurement equipment. In general, the targeted diverted amount is one significant quantity (plutonium: 8 kg, uranium-235: 75 kg). If the targeted non-detection probability or the performance of the verification equipment is low, the sample size increases; conversely, with a high non-detection probability or good equipment performance, even a small sample size is sufficient. The sample size cannot be determined from a single calculation because there are many sample size combinations across the verification equipment and many diversion scenarios to consider. The IAEA therefore estimates an initial sample size based on a statistical method to reduce the calculation load and then calculates the non-detection probability for that combination of initial sample sizes. Through iterative calculation, the sample size that comes closest to the target value is derived.
A sample size calculation code has been developed to review the IAEA's calculation method. The main difference is that the IAEA calculates sample size using an approximate equation, whereas in this study the sample size is calculated with the exact equation. A benchmarking study was performed against reference materials, and the data obtained by the code agree with the reference materials within an acceptable range. The calculation method developed in this study will be applied to support IAEA and domestic inspection activities in uranium fuel fabrication facilities.
The appropriateness of the minimum detectable activity (MDA) in gamma radionuclide analysis is very important: rationally determining the measurement time among the analysis conditions allows radioactivity analysis to be performed both accurately and quickly. In this study, 100 mL of an unknown sample was diluted in a 1 L Marinelli beaker to obtain and review gamma analysis results and the minimum detectable activity for each measurement time. Measurements were performed with a high-purity germanium detector; the target nuclides were Co-57, Co-58, Y-88, and Cs-137. Since the samples for radioactivity analysis are expected to be waste at or below the self-disposal level under radioactive waste classification, the MDA criterion was set as detection below the allowable activity for self-disposal for each nuclide. Measurements were carried out at seven counting times: 1,000, 3,600, 10,000, 30,000, 80,000, 100,000, and 150,000 seconds. The measured activities were Co-57 2.89 Bq·g−1, Co-58 0.19 Bq·g−1, Y-88 0.20 Bq·g−1, and Cs-137 0.15 Bq·g−1, and the measurement results were similar under all conditions. On the other hand, at 1,000 and 3,600 seconds the minimum detectable activity exceeded the allowable activity for self-disposal for all nuclides except Co-58. Only at measurement times of 10,000 seconds or longer were MDAs of Co-57 0.0095 Bq·g−1, Co-58 0.0068 Bq·g−1, Y-88 0.0052 Bq·g−1, and Cs-137 0.0062 Bq·g−1 obtained, confirmed to be below the allowable activity for self-disposal for each nuclide. Rationally determining the measurement time in gamma radionuclide analysis is a very important issue in terms of both time economy and measurement accuracy.
Although this study cannot determine a reasonable measurement time for all gamma radionuclide analyses, it is hoped that research on various sample types will follow, contributing to the efficient measurement of gamma radioactivity.
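The dependence of the MDA on counting time described above can be illustrated with Currie's widely used formula. This is a sketch with hypothetical detector parameters (efficiency, emission probability, background rate, sample mass), not the study's actual calibration:

```python
import math

def currie_mda(background_counts, live_time_s, efficiency, emission_prob, mass_g):
    """Currie's minimum detectable activity (Bq/g) at 95 % confidence.

    MDA = (2.71 + 4.65 * sqrt(B)) / (t * eps * p_gamma * m),
    where B is the background counts in the peak region over live time t.
    """
    ld = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit in counts
    return ld / (live_time_s * efficiency * emission_prob * mass_g)

# Hypothetical values: background counts scale with counting time,
# so the MDA falls roughly as 1 / sqrt(t) as the counting time grows.
for t in (1_000, 10_000, 100_000):
    b = 0.5 * t  # assumed background count rate of 0.5 cps in the peak region
    print(t, currie_mda(b, t, efficiency=0.02, emission_prob=0.85, mass_g=100.0))
```

This 1/√t behavior is why the MDAs in the study drop below the self-disposal limits only once the counting time reaches about 10,000 seconds, while further increases in counting time bring diminishing returns.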