        Search results: 2,401

        61.
        2023.07 Subscription: free for certified institutions, paid for individual members
        As the millennial generation continues to grow, millennials are considered the treasure of today's fine jewelry business. The key objective of this study is to understand why they buy fine jewelry using means-end theory. Brand, function, and beauty are the means that lead to happiness and self-confidence, the end values of the millennial mindset.
        KRW 4,000
        62.
        2023.06 KCI-indexed; subscription: free for certified institutions, paid for individual members
        Since 2020, the Korean Animal and Plant Quarantine Agency has been resetting the withdrawal time (WT) for veterinary drugs typically used in livestock in preparation for the introduction of the positive list system (PLS) program in 2024. This study was conducted to reset the maximum residue limit (MRL) for amprolium (APL) in broiler chickens as part of the PLS program introduction. Forty-eight healthy Ross broiler chickens were orally administered APL through drinking water at concentrations of 60 mg/L (APL-1, n=24) for 14 days and 240 mg/L (APL-2, n=24) for 7 days. After drug treatment, tissue samples were collected from six broiler chickens at 0, 1, 3, and 5 days. Residual APL concentrations in poultry tissues were determined using LC-MS/MS. The correlation coefficients (>0.99), limits of quantification (LOQ, 0.3–5.0 μg/kg), recoveries (81.5–112.4%), and coefficients of variation (<15.5%) satisfied the validation criteria of the Korean Ministry of Food and Drug Safety. In APL-1, APL was below the LOQ in all tissues except kidney at 3 days after drug treatment. In APL-2, APL remained above the LOQ in liver and kidney at 5 days after treatment. According to the European Medicines Agency's guideline on the determination of withdrawal periods, the withdrawal periods of APL-1 and APL-2 in poultry tissues were established as 3 and 2 days, respectively. In conclusion, the developed analytical method is sensitive and reliable for detecting APL in poultry tissues. The estimated WT of APL in poultry tissues is longer than the current WT recommendation of 2 days for APL in broiler chickens.
        KRW 4,000
        63.
        2023.06 KCI-indexed; subscription: free for certified institutions, paid for individual members
        In this study, we analyze a finite-buffer M/G/1 queueing model with randomized pushout space priority and nonpreemptive time priority. Space and time priority queueing models have been extensively studied to analyze the performance of communication systems serving different types of traffic simultaneously: one type is sensitive to packet delay, and the other is sensitive to packet loss. However, these models have limitations. Some models assume that packet transmission times follow exponential distributions, which is not always realistic. Other models use general distributions for packet transmission times, but their space priority rules are too rigid, making it difficult to fine-tune service performance for different types of traffic. Our proposed model addresses these limitations and is more suitable for analyzing communication systems that handle different types of traffic with general packet length distributions. For the proposed queueing model, we first derive the distribution of the number of packets in the system when the transmission of each packet is completed, and we then obtain packet loss probabilities and the expected number of packets for each type of traffic. We also present a numerical example to explore the effect of a system parameter, the pushout probability, on system performance for different packet transmission time distributions.
        KRW 4,800
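The randomized pushout mechanism described in the abstract above lends itself to a quick discrete-event sanity check. The sketch below is not the paper's analytical M/G/1 model — it is a toy simulation with exponential service for simplicity, and all parameter names (`lam_L`, `pushout_prob`, the class labels) are invented for illustration. It shows how the pushout probability trades loss between the space-priority class L and the time-priority class D:

```python
import random

def simulate(lam_L=0.6, lam_D=0.6, mu=1.0, buffer_size=10,
             pushout_prob=0.5, horizon=50_000.0, seed=7):
    """Estimate per-class loss probabilities in a finite-buffer, single-server
    queue where class L (loss-sensitive) has randomized pushout space priority
    and class D (delay-sensitive) has nonpreemptive time priority."""
    rng = random.Random(seed)
    t = 0.0
    arr = {"L": rng.expovariate(lam_L), "D": rng.expovariate(lam_D)}
    dep = float("inf")               # next service completion; inf = idle
    queue = []                       # waiting packets (class labels only)
    arrived = {"L": 0, "D": 0}
    lost = {"L": 0, "D": 0}

    def start_service():
        nonlocal dep
        # nonpreemptive time priority: serve a waiting "D" packet first
        i = next((k for k, cls in enumerate(queue) if cls == "D"), 0)
        queue.pop(i)
        dep = t + rng.expovariate(mu)

    while t < horizon:
        t = min(arr["L"], arr["D"], dep)
        if t == dep:                 # service completion
            dep = float("inf")
            if queue:
                start_service()
            continue
        c = "L" if t == arr["L"] else "D"
        arrived[c] += 1
        arr[c] = t + rng.expovariate(lam_L if c == "L" else lam_D)
        if dep == float("inf"):      # server idle: begin service at once
            dep = t + rng.expovariate(mu)
        elif len(queue) < buffer_size:
            queue.append(c)
        elif c == "L" and "D" in queue and rng.random() < pushout_prob:
            queue.remove("D")        # randomized pushout of one "D" packet
            lost["D"] += 1
            queue.append(c)
        else:
            lost[c] += 1             # buffer full, no (successful) pushout
    return {k: lost[k] / max(arrived[k], 1) for k in ("L", "D")}
```

Raising `pushout_prob` should lower the loss probability of class L at the expense of class D — the same tuning knob the abstract describes for fine-tuning service performance between traffic types.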
        64.
        2023.06 KCI-indexed; subscription: free for certified institutions, paid for individual members
        The cutting process, a key processing technology in various industrial fields, continues to grow, and demand for high-quality cut surfaces keeps increasing. Plasma cutting is studied for its excellent workability and productivity, but problems with cut-surface quality, such as dross formation, occur, so research on securing excellent cut-surface quality through appropriate control of process variables is essential. In this study, we propose a method for predicting surface roughness using real-time current and cutting-speed data obtained while plasma cutting A106 Grade B steel pipe. Surface roughness was predicted with a radial basis function (RBF) algorithm applicable to prediction and control models. The results show that the surface roughness of the plasma-cut surface can be predicted from the arc current waveform and process speed data. This study can serve as a basis for controlling the surface roughness of the cut surface in real time.
        KRW 4,000
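As a rough illustration of the kind of RBF predictor the abstract mentions, the following sketch fits a Gaussian RBF interpolant to (arc current, cutting speed) → roughness pairs. The data, `gamma`, and the ridge term are all hypothetical; the paper's actual model, features, and training data are not reproduced here:

```python
import math

def _solve(A, b):
    """Naive Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(X, y, gamma=0.001, ridge=1e-8):
    """Fit a Gaussian RBF model with one center per training point; a tiny
    ridge term keeps the kernel matrix well conditioned."""
    n = len(X)
    K = [[math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
          + (ridge if i == j else 0.0) for j in range(n)] for i in range(n)]
    w = _solve(K, y)

    def predict(x):
        return sum(w[j] * math.exp(-gamma * sum((a - b) ** 2
                                                for a, b in zip(x, X[j])))
                   for j in range(n))
    return predict
```

With a small ridge, the fitted model nearly interpolates the training points, which is the behavior one would exploit when mapping process signals such as current and speed onto a roughness estimate.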
        65.
        2023.06 KCI-indexed; subscription: free for certified institutions, paid for individual members
        With the Fourth Industrial Revolution, the use of big data analysis based on artificial intelligence technology is increasing across the entire industry. The value of big data is growing, and the same is true in production technology. However, small and medium-sized manufacturers find it difficult to use such data in their work due to a lack of data management capability, which also hinders their transition to smart factories. Therefore, to help small and medium-sized manufacturers use big data, we predict gross production time through machine learning. In previous studies, machine learning was performed on the time and quantity factors for production, and the superiority of the ExtraTree algorithm in predicting gross production time was confirmed. In this study, worker proficiency factors were added to the time and quantity factors necessary for production, and the LightGBM algorithm showed the highest prediction accuracy. The results of this study identify the potential for data utilization in the MES system and support systematic production schedule management, which will help enhance a company's competitiveness.
        KRW 4,000
        66.
        2023.06 KCI-indexed; subscription: free for certified institutions, paid for individual members
        Real-time monitoring technology for nuclear power plant structures is in demand, but the seismic monitoring systems currently in operation limit system identification, such as the extraction of dynamic characteristics. To obtain global behavior data and dynamic characteristics, a large number of sensors must be optimally placed. Optimal sensor placement has been studied extensively, but mainly for civil and mechanical structures; no studies have targeted nuclear power plant structures. Nuclear power plant structures require robust signal acquisition even at low signal-to-noise ratios, and because the modal contribution is concentrated in the lower modes, the noise effect on each mode must be taken into account, among other structural characteristics. This study therefore proposes an optimal sensor placement methodology that can evaluate robustness to noise and the effect on each mode. The auto MAC (Modal Assurance Criterion), cross MAC, and the mode-shape distribution at each node were analyzed as indices, and the suitability of the noise-robustness evaluation was verified by numerical analysis.
        KRW 4,000
        67.
        2023.06 KCI-indexed; subscription: free for certified institutions, paid for individual members
        We used measurement data from proton transfer reaction time-of-flight mass spectrometry (PTR-ToF-MS) to ascertain the source profiles of volatile organic compounds (VOCs) from the 4 major industrial classifications with the highest emissions among the 26 industrial classifications of the A industrial complex. Methanol (MOH) was the most abundant VOC in fabricated metal manufacturing, followed by dichloromethane (DM), ethanol (EN), and acetaldehyde (AAE). In printing and recording media, emissions of ethyl acetate (EA) and toluene (TOL) were the highest, followed by acetone (ACT), ethanol (EN), and acetic acid (AA). TOL, MOH, 2-butanone (MEK), and AAE were measured at high concentrations in rubber and plastic manufacturing. In sewage, wastewater, and manure treatment, TOL was the highest, followed by MOH, H2S, and ethylbenzene (EBZ). In future studies, source profiles that can provide scientific evidence should be completed for the various industrial classifications, and specific VOC mitigation plans for each classification should then be established.
        KRW 4,600
        68.
        2023.06 KCI-indexed; subscription: free for certified institutions, paid for individual members
        This paper proposes a novel machine scheduling model for the unrelated parallel machine scheduling problem without setup times, minimizing the makespan (the maximum completion time). This problem is NP-complete, and to date most approaches in real-life situations rely on the operator's experience or simple heuristics. The new model is based on the Memetic Algorithm, proposed by P. Moscato in 1989, a hybrid algorithm combining a genetic algorithm with local search optimization. The new model was tested on randomly generated datasets and compared to the optimal solution and four scheduling models: three rule-based heuristic algorithms and a genetic-algorithm-based scheduling model from the literature. The test results show that the new model outperformed the scheduling models from the literature.
        KRW 4,000
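Moscato-style memetic algorithms combine a genetic algorithm with per-individual local search. The sketch below is a minimal illustration of that hybrid for unrelated parallel machines (a processing-time matrix `p[machine][job]`), not the paper's implementation; the operators and parameters are simplified guesses:

```python
import random

def makespan(assign, p):
    """Maximum machine load when job j runs on machine assign[j]."""
    loads = [0.0] * len(p)
    for j, m in enumerate(assign):
        loads[m] += p[m][j]
    return max(loads)

def local_search(assign, p):
    """Hill climbing: move single jobs off the most loaded machine while the
    makespan of the affected machine pair improves."""
    assign = assign[:]
    improved = True
    while improved:
        improved = False
        loads = [0.0] * len(p)
        for j, m in enumerate(assign):
            loads[m] += p[m][j]
        worst = max(range(len(p)), key=loads.__getitem__)
        for j, m in enumerate(assign):
            if m != worst:
                continue
            for m2 in range(len(p)):
                if m2 != worst and max(loads[worst] - p[worst][j],
                                       loads[m2] + p[m2][j]) < loads[worst]:
                    assign[j] = m2
                    improved = True
                    break
            if improved:
                break
    return assign

def memetic(p, pop_size=30, gens=60, seed=3):
    """Tournament selection + uniform crossover + mutation, with local search
    applied to every new individual (the 'memetic' step)."""
    rng = random.Random(seed)
    n_m, n_j = len(p), len(p[0])
    pop = [local_search([rng.randrange(n_m) for _ in range(n_j)], p)
           for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if makespan(a, p) < makespan(b, p) else b
        p1, p2 = tournament(), tournament()
        child = [p1[j] if rng.random() < 0.5 else p2[j] for j in range(n_j)]
        child[rng.randrange(n_j)] = rng.randrange(n_m)   # mutation
        child = local_search(child, p)
        worst = max(range(pop_size), key=lambda i: makespan(pop[i], p))
        if makespan(child, p) < makespan(pop[worst], p):
            pop[worst] = child
    return min(pop, key=lambda c: makespan(c, p))
```

Replacing the worst population member only when the refined child improves on it keeps the search elitist, which is a common (though not the only) design choice in memetic schedulers.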
        70.
        2023.05 Subscription: free for certified institutions and individual members
        The Korea Atomic Energy Research Institute (KAERI) employs a methodology for evaluating radionuclide concentrations that divides nuclides into volatile and non-volatile groups according to their characteristics, to ensure the permanent disposal of internally generated radioactive waste. Gamma spectroscopy enables the detection and concentration determination of individual nuclides in samples containing multiple gamma-emitting nuclides. Due to the stochastic nature of radioactive decay, radiation signals can reach the detector faster than the detected signals can be processed, causing dead time in gamma spectroscopy. Radioactive waste samples typically exhibit higher radiation levels than environmental samples, leading to long dead times during measurement and consequently reducing the accuracy of the analysis. Therefore, dead time must be considered when analyzing radioactive waste samples. During measurement, the dead time may vary from a few seconds to several tens of thousands of seconds. A long dead time also causes a temporal loss in the analysis stage, requiring more time than the actual measurement. Samples with long dead times are re-measured after dilution to facilitate the analysis. As the prepared solution is also used in the nuclide separation processes, minimizing sample loss during dilution is crucial. Hence, predicting in advance whether a target sample will exceed the dead time limit and determining the corresponding dilution factor can prevent delays in the analysis process and sample loss due to dilution. In this study, to address these issues in gamma analysis, we used data generated during the analysis process to investigate methods for predicting long-dead-time samples in advance and to determine criteria for dilution factors. Comparing the dead-time data of 5% or more with the dose of the solution samples, we concluded that analysis should be performed after dilution when the dose is about 0.4 μSv/h or higher. However, some samples required dilution even at doses below 0.4 μSv/h. Upon re-measurement after dilution, samples with a dead time of less than 32% fell below 5% when diluted 10-fold, while those above 32% required more than 10-fold dilution. We expect that with additional data collected from analyzing such samples in the future, clearer criteria can be established, allowing long-dead-time samples to be predicted in advance and solving the problems of analysis delay and sample loss.
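The empirical thresholds reported in the abstract above (dilute at roughly 0.4 μSv/h or higher; a dead time under 32% drops below the 5% target after 10-fold dilution) can be packaged as a simple screening rule. This helper is purely illustrative — the function, its signature, and the over-10-fold fallback factor of 100 are assumptions, not part of the study:

```python
def dilution_plan(dose_uSv_h, dead_time_pct=None):
    """Return (needs_dilution, suggested_factor).

    Thresholds come from the abstract: dilute when the solution sample reads
    about 0.4 uSv/h or higher; a measured dead time under 32% dropped below
    the 5% target after 10-fold dilution, while 32% or more needed over
    10-fold. The 100x fallback factor is a placeholder, not from the study.
    """
    if dead_time_pct is not None and dead_time_pct >= 5.0:
        # re-measurement branch: choose the factor from the measured dead time
        return (True, 10) if dead_time_pct < 32.0 else (True, 100)
    # screening branch: predict the need for dilution from the dose rate alone
    return (True, 10) if dose_uSv_h >= 0.4 else (False, 1)
```

As the abstract notes, some samples below 0.4 μSv/h still needed dilution, so a rule like this would only be a first-pass screen until clearer criteria are established.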
        71.
        2023.05 Subscription: free for certified institutions and individual members
        Measuring the concentration of corrosion products or nuclear fission products (FPs) in molten salts is crucial for pyroprocessing and molten salt reactors. Electrochemical analysis methods that can be performed in situ offer significant advantages for monitoring the concentration of corrosion products or FPs in molten salts. A microelectrode is an electrode several tens of micrometers in length on one side. Because of its small size, a microelectrode offers several advantages for electrochemical analysis: the limiting current is reached rapidly regardless of the scan rate; the limiting current is attained immediately upon applying an overpotential, enabling instant monitoring within milliseconds; accurate measurement is possible even in low-convection systems; the small iR drop resulting from the low flowing current yields high signal accuracy; and the high current density results in a high signal-to-noise ratio (SNR). Among the various methods for making microelectrodes, techniques involving cutting a thin wire or using capillaries (such as the dual-bore capillary and pulled glass capillary methods) require precise manual skills and experience. The results may therefore vary with the maker's skill level, and it can be difficult to control the electrode's area, thickness, and surface uniformly. Recent research has focused on using semiconductor processes to fabricate microelectrodes, in which CVD, metal sputtering, photolithography, and etching work together to deposit, refine, and shape the required material on a silicon wafer. However, the durability of microelectrodes produced this way is still low (usable for about 15-30 minutes), and there is no clear research on the degradation mechanism over time. To verify the proper operation of the fabricated microelectrodes, cyclic voltammetry (CV) is performed at various scan rates (from 10 mV s-1 to 2 V s-1), and chronoamperometry (CA) is also examined to confirm whether the electrodes rapidly reach a steady-state current. After confirming proper operation, CV is measured continuously until the microelectrodes are destroyed in a LiCl-KCl solution containing a small amount of FPs (Sm 340 mM) at 450°C. By observing changes in the electrical signal of the microelectrodes over time, the durability is evaluated and the mechanism of electrode performance degradation is identified. The experiment is then repeated while increasing the temperature in 30°C steps from 450°C up to 600°C to observe the changes with temperature. This study provides basic information for future microelectrode experiments, and by diagnosing the cause of destruction, a more durable microelectrode structure can be designed.
        72.
        2023.05 Subscription: free for certified institutions and individual members
        The amount of waste that contains or is contaminated with radionuclides is gradually increasing due to the use of radioactive material in various fields, including the operation and decommissioning of nuclear facilities. Such radioactive waste should be safely managed until its disposal to protect public health and the environment. Predisposal management of radioactive waste covers all steps in the management of radioactive waste from its generation up to disposal, including processing (pretreatment, treatment, and conditioning), storage, and transport. Many strategies are possible for the predisposal management of radioactive waste. To comply with safety requirements, however, including the Waste Acceptance Criteria (WAC) of the radioactive waste repository, the optimal scenario must be derived. The type and form of the waste, the radiation dose to workers and the public, the technical options, and the costs are taken into account to determine the optimal one. The time required for each process affects the radiation dose and the respective cost, as well as those of the following procedures. In particular, the storage time of radioactive waste has the highest impact because it is the longest period, which decreases the concentrations of radionuclides but increases the cost. Few studies have been reported on optimization reflecting variations of radiation dose and cost in predisposal management scenarios for radioactive waste. In this study, the optimal storage time of radioactive waste was estimated for several scenarios. For the radiation dose, the cumulative collective dose was used as the parameter for each process. The cost was calculated considering the inflation rate and the interest rate. Since the radiation dose and the cost must be interconvertible for optimization, the collective dose was converted into monetary value using the so-called "alpha value", the monetary value of one person-sievert.
        73.
        2023.05 Subscription: free for certified institutions and individual members
        Decommissioning plans for nuclear facilities require radiological characterization and the establishment of a decommissioning process that can ensure the safety and efficiency of decommissioning workers. Utilizing rapidly developing ICT technology, we have developed a technology that can acquire, analyze, and deliver information from the decommissioning work area to ensure the safety of decommissioning workers, optimize the decommissioning process, and actively respond to various decommissioning situations. We established a surveillance system that monitors the nuclide inventory and radiation dose distribution in the dismantling work area in real time and wirelessly transmits the data for evaluation. We developed an evaluation program, based on an evaluation model for optimizing the dismantling process, that links the real-time measurement information. We also developed a technology that can detect the locations of dismantling workers in real time using stereo-vision cameras and artificial intelligence. The developed technology can be used for worker safety evaluation and process optimization evaluation by linking the radionuclide inventory and dose distribution in the dismantling work space of decommissioned nuclear power plants in the future.
        74.
        2023.05 Subscription: free for certified institutions and individual members
        Radioactive products generated by long-term operation of a nuclear power plant can become deposited on the surfaces of systems and equipment, leading to radiation exposure for workers during decommissioning. Chemical decontamination is one method of reducing worker radiation exposure; representative processes include HP CORD UV, CITROX, and CAN-DECON. Chemical decontamination generally uses organic acids, typically oxalic acid and citric acid. Various methods exist for removing residual organic acid from decontamination liquid waste, such as oxidizing agents and ion exchange. However, these approaches either consume excessive oxidizing agent or generate excessive secondary waste during treatment. When the organic acid is decomposed using a UV lamp, by contrast, the amount of secondary waste is reduced because the acid is decomposed into CO2 and H2O. In this study, organic acid decomposition was evaluated as a function of the UV lamp contact time. The experimental equipment consists of a UV reactor, a mixing tank, and a circulation pump. The experiment used 60 L of organic liquid waste containing oxalic acid, hydrogen peroxide, and iron chloride. Test A was conducted with one UV reactor, and Test B with two UV reactors connected in series. The results showed a decomposition rate of over 95% for oxalic acid after one hour, and confirmed that the initial decomposition rate was faster as the contact time increased. Therefore, to increase the initial decomposition rate, the UV lamp contact time should be increased by connecting UV reactors in series.
        75.
        2023.05 Subscription: free for certified institutions and individual members
        During the decommissioning of a nuclear power plant, the structures must be dismantled to a disposal size. Thermal cutting methods are used to reduce metal structures to a disposal size. When metal is cut with thermal methods, aerosols of 1 μm or less are generated. To protect workers from aerosols in the work environment during cutting, the characteristics of the aerosols generated during the cutting process must be understood. In this study, changes in aerosol characteristics in the working environment were observed during thermal cutting of metal. The cutting was done using the plasma arc cutting method. To simulate the aerosols generated during metal cutting in nuclear power plant decommissioning, a non-radioactive stainless steel plate 20 mm thick was cut, with a plasma current of 80 A and a cutting speed of 100 mm/min. The aerosols generated during cutting were measured using a high-resolution aerosol measurement device, the HR-ELPI+ (Dekati®), an instrument that covers aerodynamic diameters from 0.006 μm to 10 μm divided into 500 channels. Using the HR-ELPI+, the number concentration of the aerosols generated during cutting was measured in real time at regular intervals from the start of cutting. The measured aerosol concentration increased almost tenfold, from 5.22×10^6 /cm^3 at the start of cutting to 6.03×10^7 /cm^3 at the end. To investigate the characteristics of the distribution, we calculated the Count Median Aerodynamic Diameter (CMAD), which showed that the overall aerosol diameter increased from 0.0848 μm at the start of cutting to 0.1247 μm at the end. The calculated results were compared with the concentration by diameter over time. During the cutting process, particles with a diameter of 0.06 μm or smaller were measured continuously. In comparison, particles with a diameter of 0.2 μm or larger increased in concentration after a certain time following the start of cutting. In addition, when the aerosol was measured after cutting had ended, the particles with diameters of 0.06 μm or less that had been measured during cutting were hardly detected. These results show that nucleation-sized aerosols are generated during the cutting process, which explains the measurement of small particles at the beginning of cutting. It can also be inferred that the generated aerosols grow through contact with the atmosphere. This study presents the results of real-time aerosol analysis during plasma arc cutting of stainless steel, showing the generation of nucleation-sized particles at the beginning of the cutting process and the subsequent increase in aerosol particle size over time at the worksite. The results can characterize the size of the aerosol particles that workers may inhale during the dismantling of nuclear power plants.
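The CMAD values quoted in the abstract above are the 50% point of the cumulative number distribution over the instrument's diameter channels. A minimal way to compute such a statistic from binned counts, with log-interpolation inside the median channel, might look like the sketch below (illustrative only; this is not the HR-ELPI+ software, and the channel diameters are assumed to be representative bin diameters):

```python
import math

def cmad(diameters_um, counts):
    """Count median aerodynamic diameter from binned number concentrations.

    Walks the cumulative count until it crosses 50% of the total, then
    log-interpolates between the previous and current channel diameters,
    reflecting the roughly lognormal spacing of impactor channels.
    """
    total = float(sum(counts))
    cum = 0.0
    for i, (d, c) in enumerate(zip(diameters_um, counts)):
        if cum + c >= total / 2.0:
            f = (total / 2.0 - cum) / c          # fraction of this channel used
            lo = diameters_um[i - 1] if i > 0 else d
            return math.exp(math.log(lo) + f * (math.log(d) - math.log(lo)))
        cum += c
    return float(diameters_um[-1])
```

Applied to successive real-time scans, a statistic like this would show the growth the study reports, with the median diameter shifting upward as the cut progresses.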
        76.
        2023.05 Subscription: free for certified institutions and individual members
        Spent nuclear fuel (SNF) characterization is important for nuclear safety and safeguards. Regardless of whether SNF is waste or an energy resource, International Atomic Energy Agency (IAEA) Specific Safety Guide-15 states that the storage requirements for SNF comply with IAEA General Safety Requirements Part 5 (GSR Part 5) for the predisposal management of radioactive waste. GSR Part 5 requires classifying and characterizing radioactive waste at various steps of predisposal management. Accordingly, SNF should be stored and handled as accurately characterized in the storage stage before permanent disposal, and appropriate characterization methods must exist to meet these requirements. SNF characterization is basically performed through destructive and non-destructive analysis, in addition to calculations based on the reactor operation history. Burnup, Initial enrichment, and Cooling time (BIC) are the primary identification targets for SNF characterization, and the analysis mainly uses correlations identified between the BIC set and other SNF characteristics (e.g., burnup and neutron emission rate). Further identification of correlations among SNF characteristics will thus be the basis for proposing new analysis methods. Therefore, we aimed to simulate an SNF assembly with varying burnup, initial enrichment, and cooling time, correlate other SNF properties with the BIC sets, and identify correlations usable for SNF characterization. In this study, a 'CE 16×16' type assembly was simulated with the SCALE/ORIGAMI code while varying the BIC set, and the decay heat, radiation emission characteristics, and nuclide inventory of the assembly were calculated. We then analyzed how these characteristics change with changes in the BIC set. This study is expected to provide basic data for proposing a new method of characterizing PWR SNF assemblies.
        77.
        2023.05 Subscription: free for certified institutions and individual members
        A prototype sample size calculation algorithm was developed to select inspection samples in domestic bulk handling facilities. This algorithm determines the sample sizes of three verification methods satisfying the target detection probability for defected items corresponding to one significant quantity (8 kg of plutonium, 75 kg of uranium-235). In addition, instead of the approximation-equation-based algorithm presented in the IAEA report, a sample size calculation algorithm based on the hypergeometric density function, capable of calculating an exact non-detection probability, is adopted. The algorithm based on the exact equation evaluates the non-detection probability more accurately than the existing algorithm, but its computation time is considerably longer due to the much larger amount of computation. The sample size must be determined within a few hours on laptop-level hardware, because it is generally calculated on an inspector's portable laptop during inspection activities. Therefore, the calculation speed of the exact-equation algorithm had to be improved. In this study, the algorithm was optimized to reduce computation time. To determine the optimal sample size, an initial sample size is calculated first, and an iterative process then varies the sample size to find the optimal result; most of the computation time is spent in this iterative optimization. First, the algorithm that calculates the non-detection probability for the sample sizes of the three verification methods was improved within the iterative optimization loop. The computation time of each step within the algorithm was reviewed in detail, and improvements were derived and applied to the areas with major impact. In addition, the number of iterations needed to find the optimal sample size was greatly reduced by applying an algorithm based on the bisection method. This method searches with a large interval at the beginning and reduces the interval size as the number of repetitions increases, so it needs fewer iterations than the existing algorithm, which uses a unit interval size. Finally, sample sizes were calculated for the 219 example cases presented in the IAEA report to compare computation times. The existing algorithm took about 15 hours, but the improved algorithm took only about 41 minutes on a high-performance workstation (about 22 times faster), and 87 minutes on a regular laptop. With these improvements, the sample size determination process, which was previously based on the approximation equation due to the complexity and speed of the exact calculation, is expected to be performed based on the exact equation.
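The two ingredients described above — an exact hypergeometric non-detection probability and a bisection search for the smallest adequate sample size — can be sketched for a single verification method as follows. This is an illustration of the idea only; the report's algorithm jointly optimizes the sample sizes of three verification methods:

```python
from math import comb

def non_detection_prob(N, d, n):
    """P(a simple random sample of n items from N misses all d defected
    items) — the exact hypergeometric value at zero, not an approximation."""
    if n > N - d:
        return 0.0                       # sample must contain a defected item
    return comb(N - d, n) / comb(N, n)

def min_sample_size(N, d, target=0.95):
    """Smallest n whose detection probability reaches `target`, located by
    bisection; valid because detection probability is monotone in n, so the
    search halves the interval instead of stepping by unit increments."""
    lo, hi = 0, N
    while lo < hi:
        mid = (lo + hi) // 2
        if 1.0 - non_detection_prob(N, d, mid) >= target:
            hi = mid                     # feasible: shrink from above
        else:
            lo = mid + 1                 # infeasible: shrink from below
    return lo
```

The bisection needs only about log2(N) probability evaluations per query, which mirrors the speedup the abstract attributes to replacing unit-interval stepping.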
        78.
        2023.05 Subscription: free for certified institutions and individual members
        Timely detection of nuclear activity is important for the management and supervision of nuclear materials, and in the rapidly changing relationship with North Korea, inspections of North Korea as a safeguards measure for such a monitoring system may not be far off. However, a simpler and more reliable monitoring device is required, since the inspection period is limited and the possibility of revisiting is unclear. A seal is a simple but widely used security device in the IAEA for detecting forgery and tampering. The IAEA presents 1) project engineering, 2) remote monitoring, and 3) seal development as the three major tasks of the Department of Safeguards' Division of Technical Support. The importance of developing, modernizing, and applying new seal devices is emphasized, and advanced sealing and identification system development is in progress at many research institutes, such as JRC, ISCN, and JAEA. Since damage to the existing seal devices used by the IAEA and KINAC can only be confirmed through on-site inspections, it is difficult to respond immediately to events such as theft of nuclear materials or loss of continuity of knowledge. Unlike facilities that comply with safeguards requirements, such as domestic nuclear facilities, facilities subject to denuclearization are very likely to present various hazardous environments that make safeguards difficult to apply. Hence, a real-time seal device was developed through prior research, given the high likelihood of situations in which Continuity of Knowledge (COK) is not maintained, such as damage, malfunction, or power loss of sealing and monitoring equipment. Through previous studies, the real-time seal device was loaded with server-based operating software, and its performance was improved using feedback from real users (KINAC). In this study, the effectiveness of the previously developed sealing system was verified through performance evaluation, and certification of the equipment was secured through environmental tests.
        79.
        2023.05 KCI-indexed; subscription: free for certified institutions, paid for individual members
        Productive humanities-based convergence research requires reflection on methodology. The purpose of this paper is to introduce and evaluate a case of such a methodology within the context of convergence research between phenomenology and cognitive science. The paper introduces Francisco Varela's neurophenomenology, which front-loads Edmund Husserl's phenomenology of time-consciousness into neuroscientific research, and evaluates the significance and limitations of this methodology as well as its implications for humanities-based convergence research. Varela's work connects phenomenology's analysis of the living present, i.e., the retention-primal impression-protention structure, with the biological basis of brain activity via dynamical systems theory. On this account, brain activity occurs on three scales of temporality — the elementary scale, the integration scale, and the narrative scale — with the integration scale corresponding to the living present. The significance of this research is that it provides a medium for explaining time-consciousness and that it bridges the 'explanatory gap' associated with the 'hard problem' of consciousness. Its limitations are that it commits the error of explaining the 'consciousness of time' in terms of the 'time of consciousness', and that it can hardly provide more than an isomorphism between lived experience and neural processes. Its implications for humanities-based convergence research are that the humanities must maintain their own ideas and methodology, and that the humanities can contribute as a theoretical foundation for science.
        KRW 6,100
        80.
        2023.05 KCI-indexed; subscription: free for certified institutions, paid for individual members
        Determining a proper earthquake time history as the seismic load is essential in the seismic design of a critical structure. By code, a seismic load should satisfy the design response spectrum and include the characteristics of the target fault. The characteristics of a fault can be represented by defining the type of earthquake time history shape the target fault can produce. In this paper, a pseudo-basis function is proposed for constructing a specific type of earthquake that includes the characteristics of a target fault. The pseudo-basis function is derived by analyzing the earthquake time histories of a specific fault with the harmonic wavelet transform. To show the feasibility of the method, it was applied to the faults that caused the Gyeongju ML 5.8 and Pohang ML 5.3 earthquakes.
        KRW 4,000