For safeguarding dry storage facilities, a tomography system based on fast and thermal neutron detection was studied at the Korea Institute of Nuclear Nonproliferation and Control. The study consisted of laboratory-scale experiments based on a custom-built 1/10th-scale model cask, a He-4 gas scintillation detector array, and multiple 252Cf sources. Filtered back projection (FBP), implemented in MATLAB, was used to reconstruct the cask image. The Ram-Lak (ramp) filter was employed in the FBP to improve the reconstructed image quality. The amplitude of the Ram-Lak filter increases linearly with the spatial frequency of the image. In the spatial-frequency domain, low frequencies correspond to slowly varying brightness and high frequencies to rapid brightness changes. In this study, the high-frequency content of the neutron tomographic image therefore lies near the neutron sources and the cask, while the low-frequency content lies outside the cask and/or between the sources and the cask. To apply the ramp filter, a Fourier transform is first performed on the projection data, and the image is then reconstructed from the filtered projections. Because the filter varies linearly with frequency, a small filter value is applied at low spatial frequencies, suppressing that part of the projection data, and a large filter value is applied at high spatial frequencies, emphasizing it. The filter scale is a fraction of the maximum frequency, and it determines the filter values applied to the projection data. This study examined the effect of this filter scale on the quality of images reconstructed by the neutron tomography system. The results show that in the single-source experiment, the source location was found when the frequency scale was 0.5 or higher. In the double- and triple-source experiments, the source locations and relative activities were identified with filter scales of 0.4 to 0.6; with a filter frequency scale of 0.7 or higher, the relative activities were difficult to determine. Hence, if the filter scale is too large or too small, the reconstruction results may be distorted, and it seems reasonable to set the scaling factor of the neutron tomography system to a value between 0.4 and 0.6. In the future, additional comparative studies will be performed to validate the frequency scaling methods.
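A minimal sketch of the frequency-scaling idea, written in Python rather than the MATLAB implementation used in the study; the helper names scaled_ramp_filter and fbp, the parallel-beam geometry, and the use of scikit-image's iradon for the backprojection step are our assumptions, with the cutoff convention following MATLAB iradon's frequency_scaling option:

```python
import numpy as np
from skimage.transform import iradon

def scaled_ramp_filter(sinogram, scale):
    """Filter each projection with a Ram-Lak ramp cut off at scale * Nyquist."""
    n = sinogram.shape[0]
    freqs = np.fft.fftfreq(n)                  # cycles/sample; Nyquist = 0.5
    ramp = 2.0 * np.abs(freqs)                 # |f| ramp, normalized to 1 at Nyquist
    ramp[np.abs(freqs) > scale * 0.5] = 0.0    # zero out above the scaled cutoff
    spectrum = np.fft.fft(sinogram, axis=0) * ramp[:, None]
    return np.real(np.fft.ifft(spectrum, axis=0))

def fbp(sinogram, angles_deg, scale=0.5):
    """FBP of a (detector_bins x n_angles) sinogram with a scaled ramp filter."""
    filtered = scaled_ramp_filter(sinogram, scale)
    # Backproject without further filtering (filter_name=None).
    return iradon(filtered, theta=angles_deg, filter_name=None)
```

Sweeping the scale argument over, say, 0.4 to 0.6 versus 0.7 and above reproduces the kind of comparison described in the abstract.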
The overestimation and underestimation of the radioactivity concentration of difficult-to-measure radionuclides can occur during the implementation of the scaling factor (SF) method because of the uncertainties associated with sampling, radiochemical analysis, and application of SFs. Strict regulations ensure that the SF method, as an indirect method, does not underestimate the radioactivity of nuclear wastes; however, there are no clear regulatory guidelines regarding overestimation. This has led to misuse of the SF methodology by stakeholders such as waste disposal licensees and regulatory bodies. Previous studies have reported instances of overestimation in the statistical implementation of the SF methodology. The analysis of the two most popular linear models of the SF methodology showed that severe overestimation may occur and that radioactivity concentration data must be handled with care. Since one major source of overestimation is the use of minimum detectable activity (MDA) values as true activity values, a comparative study of instrumental techniques that could reduce the MDAs was also conducted. Thermal ionization mass spectrometry was recommended as a suitable candidate for the trace-level analysis of long-lived beta emitters such as iodine-129. Additionally, the current status in the United States and Korea was reviewed from the perspective of overestimation.
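A toy numerical illustration of the MDA mechanism (hypothetical values, not data from the paper): when measurement results below the detection limit are replaced by the MDA itself, a log-mean-average scaling factor is biased high.

```python
import numpy as np

key = np.array([1e2, 1e3, 1e4])         # key nuclide (e.g. 60Co) activities, Bq/g
dtm_true = np.array([1e-1, 1e0, 1e1])   # true DTM (e.g. 129I) activities, Bq/g
mda = 5.0                               # assumed minimum detectable activity, Bq/g

def lma_sf(dtm, key):
    # Log-mean-average scaling factor: exp(mean(ln(DTM/key))).
    return np.exp(np.mean(np.log(dtm / key)))

print(lma_sf(dtm_true, key))                   # true SF: 1.0e-3
# Substituting the MDA for below-detection results biases the SF high:
print(lma_sf(np.maximum(dtm_true, mda), key))  # ~6.3e-3, a ~6x overestimate
```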
The radionuclide inventory in radioactive waste from nuclear power plants should be determined to secure the safety of final repositories. As an alternative to time-consuming, labor-intensive, and destructive radiochemical analysis, the indirect scaling factor (SF) method has been used to determine the concentrations of difficult-to-measure radionuclides. Despite its long history, the original SF methodology remains almost unchanged and now needs to be improved for advanced SF implementation. Since the first operation of the low- and intermediate-level radioactive waste disposal facility in Gyeongju, Korea, intense public attention has been directed toward the reliability of the procedures and data underpinning repository safety. In this review, statistical methodologies for SF implementation are described and evaluated to support reasonable and advanced decision-making. The first part of the review gives an overview of the current status of the scaling factor method and global experience, including some specific statistical issues associated with SF implementation. In addition, the review aims to extend the applicability of SFs to the characterization of the large quantities of waste arising from the decommissioning of nuclear facilities.
We introduce a depth scaling strategy to improve the accuracy of frequency-domain elastic full waveform inversion (FWI) using a new pseudo-Hessian matrix for seismic data without low-frequency components. The depth scaling strategy is based on the fact that the damping factor in the Levenberg-Marquardt method controls the energy concentration in the gradient. In other words, a large damping factor makes the Levenberg-Marquardt method similar to the steepest-descent method, by which mainly shallow structures are recovered. With a small damping factor, the Levenberg-Marquardt method becomes similar to the Gauss-Newton method, by which deep structures can be resolved as well as shallow ones. In our depth scaling strategy, a large damping factor is used in the early stage and then decreases automatically with the trend of the error as the iterations proceed. With this strategy, the parameter-searching region moves gradually from the shallow to the deep parts of the model. The flexible damping factor retards the model-parameter updates in the shallow parts and concentrates the inversion on the deeper parts in the later stages, thereby improving the deep parts of the inversion results. The depth scaling strategy is applied to synthetic data without low-frequency components for a modified version of the SEG/EAGE overthrust model. Numerical examples show that the flexible damping factor yields better results than a constant damping factor when reliable low-frequency components are missing.
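A minimal sketch of the flexible-damping idea in Python: a Levenberg-Marquardt update whose damping factor starts large (steepest-descent-like, shallow updates) and is relaxed as the misfit decreases (Gauss-Newton-like, deeper updates). The jacobian and residual callables and the geometric decay rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def lm_inversion(model, jacobian, residual, n_iter=50,
                 lam0=1e2, lam_min=1e-4, decay=0.5):
    lam = lam0
    prev_err = np.inf
    for _ in range(n_iter):
        J = jacobian(model)                    # sensitivity (Frechet) matrix
        r = residual(model)                    # observed minus modeled data
        H = J.T @ J                            # approximate (pseudo-)Hessian
        # Damped normal equations: large lam ~ steepest descent,
        # small lam ~ Gauss-Newton.
        dm = np.linalg.solve(H + lam * np.diag(np.diag(H)), J.T @ r)
        model = model + dm
        err = np.linalg.norm(r)
        if err < prev_err:                     # error trending down:
            lam = max(lam * decay, lam_min)    # relax damping, probe deeper
        prev_err = err
    return model
```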
Final disposal of radioactive waste generated from nuclear power plants (NPPs) requires detailed information on the characteristics and quantities of the radionuclides in each waste package. Most of these radionuclides are difficult to measure and expensive to assay. An indirect method is therefore suggested in which the concentration of a difficult-to-measure (DTM) nuclide is estimated from the measured concentration of an easy-to-measure (key) nuclide using the correlation between their concentrations, called the scaling factor. In general, the scaling factor is determined by the log mean average (LMA) method or the regression method. However, these methods are inadequate for fission product nuclides and some activation product nuclides such as 14C and 90Sr. In this study, the artificial neural network (ANN) method is suggested to improve on the conventional SF determination methods, i.e., the LMA and regression methods. The root mean squared errors (RMSE) of the ANN models are compared with those of the conventional SF determination models for 14C and 90Sr on two data subsets, a training part and a validation part. Arranged in ascending order of RMSE, the ANN model ranks first among the SF determination models.
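A sketch, in Python on synthetic lognormal data, of how the LMA and ANN approaches can be compared by validation RMSE on log-transformed concentrations; the library choice (scikit-learn), network size, and data model are our assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
log_key = rng.normal(4.0, 1.0, 200)                       # ln(key activity)
log_dtm = 0.8 * log_key - 2.0 + rng.normal(0, 0.3, 200)   # nonproportional DTM

train, valid = slice(0, 150), slice(150, 200)

# LMA: a single constant ratio, SF = exp(mean(ln(DTM/key))).
sf = np.exp(np.mean(log_dtm[train] - log_key[train]))
lma_pred = log_key[valid] + np.log(sf)

# ANN: learn ln(DTM) as a flexible function of ln(key).
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
ann.fit(log_key[train, None], log_dtm[train])
ann_pred = ann.predict(log_key[valid, None])

for name, pred in [("LMA", lma_pred), ("ANN", ann_pred)]:
    print(name, "RMSE:", mean_squared_error(log_dtm[valid], pred) ** 0.5)
```

When the DTM/key relation deviates from strict proportionality, as here, the flexible model yields the lower validation RMSE, which is the kind of comparison the abstract reports.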
For low- and intermediate-level radioactive waste generated at nuclear power plants, detailed information on each drum, in particular an assessment of its radionuclide inventory, must be produced before the drum is transferred to the disposal facility. In general, however, it is difficult to predict the concentrations of the nuclides of interest in drummed radioactive waste. To overcome this, the scaling factor method is used where direct measurement is impracticable. In Korea, a nuclide analysis system based on the scaling factor concept has been in operation at the Kori nuclear power plant since 1996. However, the scaling factors applied at Kori leave considerable room for improvement, and research to improve them is currently under way. This paper briefly introduces the scope of that research and compares statistical methods for evaluating scaling factors in order to identify a more reliable method of assessing nuclide inventories. The comparison showed that replacing the arithmetic mean method used at the Kori plant with the geometric mean method not only improves prediction accuracy but also prevents overestimation of the drum nuclide inventory while maintaining reasonable conservatism.
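A toy comparison (illustrative data only, not the paper's measurements) of why a geometric-mean scaling factor overestimates less than an arithmetic mean when drum-to-drum DTM/key activity ratios are roughly lognormal:

```python
import numpy as np

rng = np.random.default_rng(1)
# Lognormal DTM/key ratios centered on 1e-3 with one decade of scatter.
ratios = np.exp(rng.normal(np.log(1e-3), 1.0, 500))

arith_sf = ratios.mean()                  # pulled up by the high-ratio tail
geo_sf = np.exp(np.log(ratios).mean())    # tracks the distribution's median

print(f"arithmetic-mean SF: {arith_sf:.2e}")   # ~1.6e-3 (biased high)
print(f"geometric-mean SF:  {geo_sf:.2e}")     # ~1.0e-3
```

For a lognormal distribution the arithmetic mean exceeds the geometric mean by a factor of exp(sigma^2/2), which is the systematic overestimation the geometric mean avoids.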