Eu2+/Dy3+-doped Sr2MgSi2O7 powders were synthesized using a solid-state reaction method with an NH4Cl flux. The broad photoluminescence (PL) excitation spectra of Sr2MgSi2O7:Eu2+ were assigned to the 4f7-4f65d transition of the Eu2+ ions, showing strong intensities in the range of 375 to 425 nm. A single emission band was observed at 470 nm, which was the result of two overlapping subbands at 468 and 507 nm owing to the Eu(I) and Eu(II) sites. The strongest emission intensity of Sr2MgSi2O7:Eu2+ was obtained at a Eu concentration of 3 mol%. This concentration quenching mechanism was attributable to dipole-dipole interaction. The substitution of Ba2+ for Sr2+ caused a blue-shift of the emission band; this behavior was discussed by considering the differences in ionic size and covalence between Ba2+ and Sr2+. The effects of the Eu/Dy ratios on the phosphorescence of Sr2MgSi2O7:Eu2+/Dy3+ were investigated by measuring the decay time; the longest afterglow was obtained for 0.01Eu2+/0.03Dy3+.
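The dipole-dipole assignment of the concentration quenching above is conventionally made through the Dexter/Van Uitert relation between emission intensity I and activator concentration x; as a reference sketch (standard theory, not quoted from the abstract):

```latex
\frac{I}{x} = c\left[1 + \beta\, x^{\theta/3}\right]^{-1}
\quad\Longrightarrow\quad
\log\frac{I}{x} \approx c' - \frac{\theta}{3}\log x ,
```

where theta = 6, 8, or 10 corresponds to dipole-dipole, dipole-quadrupole, or quadrupole-quadrupole interaction, respectively; a fitted slope of log(I/x) versus log x near -2 (theta close to 6) supports the dipole-dipole mechanism.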
In the midst of a disaster, such as an earthquake or an area exposed to nuclear radiation, sending human crews carries huge risks. Many robotics researchers have therefore studied sending unmanned ground vehicles (UGVs) to replace human crews in dangerous environments. So far, two-dimensional camera information has been widely used for the teleoperation of UGVs. Recently, teleoperation based on three-dimensional information has been attempted to compensate for the limitations of camera-based teleoperation. In this paper, 3D map information of indoor and outdoor environments, reconstructed in real time, is utilized in UGV teleoperation. Further, we apply LTE communication technology to ensure the stability of the teleoperation even in deteriorated environments. The proposed teleoperation system was applied to explosive disposal missions, and its feasibility was verified through the completion of those missions using the UGV together with the Explosive Ordnance Disposal (EOD) team of the Busan Port Security Corporation.
This paper proposes a method to simultaneously estimate two degrees of freedom of wrist forces (extension-flexion and adduction-abduction) and one degree of freedom of grasping force using electromyography (EMG) signals from the forearm. To correlate the EMG signals with the forces, we applied a multi-layer perceptron (MLP), a machine learning method, and used the characteristics of the muscles constituting the forearm to generate the training data. In the experiments, the similarity between the MLP target values and the estimated values was evaluated using the coefficient of determination (R2) and the root mean square error (RMSE) to assess the performance of the proposed method. As a result, the R2 values for the wrist flexion-extension, adduction-abduction, and grasping forces were 0.79, 0.73, and 0.78, and the RMSE values were 0.12, 0.17, and 0.13, respectively.
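The R2 and RMSE evaluation described above can be sketched as follows; the sample force values are illustrative and not taken from the paper:

```python
import math

def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    # Root mean square error between targets and MLP estimates
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical normalized wrist-force targets vs. MLP estimates
target = [1.0, 2.0, 3.0, 4.0]
estimate = [1.1, 1.9, 3.2, 3.8]
print(round(r_squared(target, estimate), 2))  # 0.98
print(round(rmse(target, estimate), 3))       # 0.158
```

R2 close to 1 means the estimates track the targets well, while RMSE reports the typical error in the same units as the (normalized) force.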
Peg-in-hole assembly is the most representative task for a robot to perform under contact conditions. Various strategies for accomplishing the peg-in-hole task with a robot exist, but the existing strategies are not sufficiently practical for diverse assembly tasks in a human environment because they require additional sensors or exclusive tools. In this paper, the peg-in-hole assembly experiment is performed with an anthropomorphic hand-arm robot, without extra sensors or devices, using an “intuitive peg-in-hole strategy”. This work verifies the feasibility of applying the peg-in-hole strategy to common assembly tasks.
A retrospective analysis was conducted on the clinical data of 38 cases of bacterial meningitis proven by cerebrospinal fluid culture. The causative organisms were GBS (68.4%), Pneumococcus (15.8%), E. coli (5.3%), Streptococcus mitis (5.3%), Streptococcus bovis (2.6%), and Staphylococcus xylosus (2.6%). Compared with the 28 cases with a normal outcome, the 10 cases who died or had adverse outcomes at hospital discharge were more likely to present with coma or seizures (before or during admission, focal, or status epilepticus), to require pressor or ventilator support, to have an initial peripheral blood leukocyte count of less than 4,000/mm3 or a neutrophil count of less than 1,000/mm3, and to have hydrocephalus or cerebral infarction on brain imaging.
This paper focuses on the development of an anthropomorphic robot hand. The human hand is able to dexterously grasp and manipulate various objects using not accurate and sufficient, but inaccurate and scarce, information about the target objects. To realize this ability of the human hand, we develop a robot hand and introduce a control scheme for stable grasping that uses only kinematic information. The developed anthropomorphic robot hand, the KITECH Hand, has one thumb and three fingers. Each of them has 4 DOF and a soft hemispherical fingertip for flexible opposition and rolling on object surfaces. In addition to the thumb and fingers, it has a palm module with a non-slip pad to prevent slip between the object and the palm. The introduced control scheme is quite simple and is based on the principle of virtual work; it consists of the transposed Jacobian together with the joint angular positions and velocities obtained from joint angle measurements. During interaction with an object, the developed robot hand shows compliant grasping motions owing to the back-drivability of the equipped actuator modules. To validate the feasibility of the developed robot hand and the introduced control scheme, a series of experiments is carried out with the developed robot hand, the KITECH Hand.
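The virtual-work control scheme described above (joint torques from the transposed Jacobian, damped by joint velocities) can be sketched for a single planar two-link finger; the link lengths, damping gain, and kinematic model here are illustrative assumptions, not the KITECH Hand's actual parameters:

```python
import math

def jacobian(q1, q2, l1=1.0, l2=1.0):
    # Geometric Jacobian of a planar 2-link finger (illustrative model)
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def virtual_work_torques(q, dq, f_des, kd=0.1):
    # tau = J(q)^T f_des - kd * dq : the desired fingertip force is mapped
    # to joint torques by the transposed Jacobian, with joint damping added
    J = jacobian(*q)
    return [J[0][i] * f_des[0] + J[1][i] * f_des[1] - kd * dq[i]
            for i in range(2)]

# Push the fingertip with 1 N along +x from a right-angle posture at rest
tau = virtual_work_torques([0.0, math.pi / 2], [0.0, 0.0], [1.0, 0.0])
print(tau)  # [-1.0, -1.0]
```

No force sensing is needed: only joint angles (and their numerical derivatives) enter the law, which is why back-drivable actuators make the resulting grasp compliant.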
Reliable functionalities for autonomous navigation and object recognition/handling are key technologies that enable service robots to execute useful services in human environments. A considerable amount of research has been conducted to make the service robot perform these operations with its own sensors, actuators, and a knowledge database. Even with all these heavy sensors, actuators, and a database, a robot could only perform the given tasks in a limited environment or showed limited capabilities in a natural environment. Following new paradigms in robot technologies, we attempted to apply smart environment technologies, such as RFID, sensor networks, and wireless networks, to robot functionalities for executing reliable services. In this paper, we introduce the concepts of the proposed smart-environment-based robot navigation and object recognition/handling methods and present results on robot services. Even though our methods differ from existing robot technologies, their successful implementation in real applications shows the effectiveness of our approaches. Keywords: Smart Environments, Service Robot, Navigation
This paper introduces a prototype smart home environment, built in our research building, that demonstrates the feasibility of a robot-assisted future home environment. Localization, navigation, object recognition, and handling are core functionalities that an intelligent service robot should provide. A huge amount of research effort has been made to make the service robot perform these functions with its own sensors, actuators, and knowledge base. With such a complicated configuration of sensors, actuators, and a database, the robot could only perform the given tasks in a predefined environment or show limited capabilities in a natural environment. We therefore built a smart home environment in which simple service robots can provide reliable services by communicating with the environment through wireless sensor networks. In this paper, we introduce various types of smart devices developed to assist the robot in the environment by providing sensing and actuation capabilities. In addition, we present how these devices are integrated to constitute the smart home environment for service robots.