Robots have been widely used for education in kindergartens and elementary schools. In this study, parents' perceptions of robots in education are investigated by analyzing the responses of 105 parents of kindergarten and elementary school children. The survey results show that most students who receive robotics education start it between five and seven years of age, and that they are educated mainly in private institutions. Consequently, parents worry about a lack of professionalism in the educational institutes and teachers. In conclusion, systematic curricula and policies for robotics education are needed for kindergarten and elementary school students.
In this paper, we propose and examine the feasibility of a robot-assisted behavioral intervention system intended to strengthen the positive responses of children with autism spectrum disorder (ASD) while they learn social skills. Based on well-known behavioral treatment protocols, the robot offers therapeutic training elements for eye contact and emotion reading in child-robot interaction and, as a coping strategy, subsequently performs pre-allocated meaningful acts based on the level of the children's reactivity estimated by reliable recognition modules. Furthermore, to save labor and attract the children's interest, we implemented a robotic stimulation configuration with semi-autonomous actions capable of inducing intimacy and tension in the children during instructional trials. By evaluating the system's ability to recognize human activity and by demonstrating improved reactivity during social training, we verified that the proposed system has positive effects on the social development of high-functioning preschoolers.
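The abstract does not specify the decision logic in code; as a rough illustration of the reactivity-gated coping strategy it describes, the following Python sketch maps the outputs of two hypothetical recognition modules (eye contact and emotion reading) to pre-allocated robot acts. All function names, weights, and thresholds here are assumptions, not the authors' published protocol.

```python
# Hypothetical sketch of reactivity-gated action selection in a
# robot-assisted intervention trial. Scores, weights, and action names
# are illustrative assumptions.

def estimate_reactivity(eye_contact_ratio: float, emotion_match_rate: float) -> float:
    """Combine two recognition-module outputs (each in [0, 1]) into one score."""
    return 0.5 * eye_contact_ratio + 0.5 * emotion_match_rate

def select_coping_action(reactivity: float) -> str:
    """Map the estimated reactivity level to a pre-allocated robot act."""
    if reactivity >= 0.7:
        return "praise_and_advance"   # reinforce, then move to the next element
    elif reactivity >= 0.4:
        return "repeat_prompt"        # re-present the eye-contact/emotion cue
    else:
        return "attention_recovery"   # playful semi-autonomous act to re-engage

if __name__ == "__main__":
    score = estimate_reactivity(eye_contact_ratio=0.6, emotion_match_rate=0.3)
    print(score, select_coping_action(score))
```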
The purpose of this study was to investigate how young children perceive the image of robots and how they understand the relationship between themselves and robots based on their school experience. Twenty children from kindergarten A had no direct experience with educational robots, whereas twenty children from kindergarten B had used educational robots in their classroom; in total, 40 children from the five-year-old age group participated. Data were collected through interviews and a drawing test. The findings are as follows. First, the participating children recognized robots as having the character of both a machine and a human, although children with previous robot experience described robots more as machine-like tools; neither group was able to explain the structure of robots in detail. Second, the participating children understood that they can develop a range of social relationships with robots, from simple help to family replacement. Views on robots were mixed among the children with previous experience, whereas children with no experience described robots as taking the role of peers or family members. These findings could contribute to the development of robots and related programs in the field of early childhood education.
The purpose of this study was to examine the concept of r-learning based on existing studies and to analyze r-learning environments in order to determine the prerequisites for the successful entrenchment of r-learning in its material (technology and infrastructure), human (young children and teachers), and institutional (law and policy) aspects. This study also intended to suggest directions for the revitalization of r-learning. In conclusion, the position of r-learning and its interrelationship with related systems in the ecosystem of early childhood education should be accurately grasped to accelerate the integration of r-learning into kindergarten education and to maximize the effects of the convergence of the two. Intensive efforts should be made from diverse angles to expedite the spread and enrichment of r-learning.
The main goal of e-learning systems is just-in-time knowledge acquisition. Rule-based e-learning systems, however, suffer from the mesa effect and the cold-start problem, both of which result in low user acceptance, and they make it difficult to implement a natural interface in humanoid robots. To address these concerns, even exceptional questions from the learner must be answerable. This paper proposes a method that understands the learner's verbal cues and then intelligently explores additional domains of knowledge based on crowd data sources such as Wikipedia and social media, ultimately allowing better answers in real time. A prototype system was implemented on the NAO platform.
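As a minimal sketch of the crowd-sourced fallback idea, the following Python snippet queries the public MediaWiki API when a question falls outside the rule base: it searches Wikipedia for the best-matching article and returns the first paragraph of its introduction. The function name and the spoken-answer truncation are illustrative assumptions; the paper's actual pipeline is not described at this level of detail.

```python
# Sketch: answer an out-of-rule-base question from Wikipedia via the
# MediaWiki API. Illustrative only; not the authors' exact pipeline.
import requests

API = "https://en.wikipedia.org/w/api.php"

def wikipedia_answer(question: str) -> str:
    # 1) Full-text search for the best-matching article title.
    search = requests.get(API, params={
        "action": "query", "list": "search",
        "srsearch": question, "srlimit": 1, "format": "json",
    }).json()
    hits = search["query"]["search"]
    if not hits:
        return "Sorry, I could not find anything on that."
    title = hits[0]["title"]
    # 2) Fetch the plain-text introduction of that article.
    page = requests.get(API, params={
        "action": "query", "prop": "extracts", "exintro": True,
        "explaintext": True, "titles": title, "format": "json",
    }).json()
    extract = next(iter(page["query"]["pages"].values()))["extract"]
    return extract.split("\n")[0]  # first paragraph as a short spoken answer

if __name__ == "__main__":
    print(wikipedia_answer("Why is the sky blue?"))
```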
In this paper, we first point out the limitations of existing usability evaluation methods and propose a new method for finding the tasks and users that cause problems in the use of software applications. To deal with object-based applications, we define unit actions and divide a given task into a sequence of unit actions. We then propose a new measure that calculates the degree of lostness over these unit actions. In several experiments, we show that the proposed method represents usability well in object-based applications, where the previous method fails. A correlation analysis also shows that users' subjective evaluations are more strongly related to the proposed measure than to the previous measure based on execution time.
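The paper's degree-of-lostness measure is its own contribution and is not reproduced in the abstract; as a reference point, the sketch below implements the classical lostness formula (Smith, 1996), which such measures typically build on, applied to a recorded sequence of unit actions. The example action trace is made up.

```python
# Illustrative sketch: classical lostness (Smith, 1996) over a sequence
# of unit actions. This is an analogue of, not identical to, the
# paper's proposed measure.
from math import sqrt

def lostness(visited: list[str], minimum_required: int) -> float:
    """0 = perfectly efficient path; values toward sqrt(2) = badly lost.

    visited          -- unit actions the user actually performed, in order
    minimum_required -- R, the fewest unit actions needed to finish the task
    """
    S = len(visited)        # total actions performed
    N = len(set(visited))   # distinct actions performed
    R = minimum_required
    return sqrt((N / S - 1) ** 2 + (R / N - 1) ** 2)

if __name__ == "__main__":
    trace = ["open_menu", "open_dialog", "open_menu", "set_option", "confirm"]
    print(round(lostness(trace, minimum_required=3), 3))  # ~0.32
```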
In this paper, we propose visual information that provides a highly maneuverable system for a tele-operator. The visual information is a bird's-eye-view image, captured by a UFR (Unmanned Flying Robot), showing the surroundings of a UGR (Unmanned Ground Robot). For the UFR to follow the UGR at all times, a UGR detection and tracking method is needed. The proposed system uses the TLD (Tracking-Learning-Detection) method to rapidly and robustly estimate the motion of the newly detected UGR between consecutive frames; the TLD system trains an online UGR detector for the tracked UGR. The proposed system also uses an extended Kalman filter to enhance the performance of the tracker. As a result, the tele-operator is provided with visual information for convenient control.
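A minimal sketch of this tracking pipeline is shown below, combining OpenCV's legacy TLD tracker with a constant-velocity Kalman filter to smooth the tracked UGR position. Note the assumptions: the paper uses an extended Kalman filter with its own motion model, so this linear filter is only a stand-in, and the video path, initial bounding box, and noise covariances are made up. It requires opencv-contrib-python for the legacy TLD module.

```python
# Sketch: TLD tracking of the UGR, smoothed by a linear constant-velocity
# Kalman filter standing in for the paper's EKF. Parameters are illustrative.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)  # state: [x, y, vx, vy], measurement: [x, y]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

cap = cv2.VideoCapture("ufr_view.mp4")       # hypothetical aerial video
ok, frame = cap.read()
tracker = cv2.legacy.TrackerTLD_create()
tracker.init(frame, (300, 200, 80, 60))      # assumed initial UGR bounding box

while True:
    ok, frame = cap.read()
    if not ok:
        break
    prediction = kf.predict()                # predicted UGR center
    found, (x, y, w, h) = tracker.update(frame)
    if found:
        center = np.array([[x + w / 2], [y + h / 2]], np.float32)
        kf.correct(center)                   # fuse the TLD measurement
    cx, cy = int(prediction[0, 0]), int(prediction[1, 0])
    cv2.circle(frame, (cx, cy), 5, (0, 255, 0), -1)
    cv2.imshow("UGR tracking", frame)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break
```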
This paper proposes a pose-graph-based SLAM method using an upward-looking camera and artificial landmarks for AGVs in factory environments. The proposed method provides a way to acquire the camera extrinsic matrix and improves the accuracy of feature observation using a low-cost camera. SLAM is conducted by optimizing the AGV's explored path using artificial landmarks installed at various locations on the ceiling. As the AGV explores, pose nodes are added at a fixed odometry distance, and landmark nodes are registered when the AGV recognizes the fiducial markers. The resulting graph network is optimized with the g2o optimization tool so that the error accumulated due to slip is minimized. Experiments show that the proposed method is robust for SLAM in real factory environments.
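The graph-construction policy described above can be sketched as follows: a pose node is added after each fixed stretch of odometry, and a landmark node plus an observation edge is registered whenever a ceiling marker is recognized. The data structures, the 1 m spacing, and the raw measurements stored on edges are illustrative assumptions; the actual nonlinear optimization would be handed to g2o.

```python
# Sketch of pose-graph construction for ceiling-marker SLAM.
# Illustrative data structures only; optimization is delegated to g2o.
from dataclasses import dataclass, field

POSE_SPACING = 1.0  # meters of odometry between pose nodes (assumed)

@dataclass
class PoseGraph:
    poses: list = field(default_factory=list)      # (x, y, heading) nodes
    landmarks: dict = field(default_factory=dict)  # marker id -> first (x, y)
    edges: list = field(default_factory=list)      # (kind, i, j, measurement)

    def maybe_add_pose(self, odom_pose, traveled_since_last):
        """Add a pose node once the AGV has moved POSE_SPACING meters."""
        if not self.poses or traveled_since_last >= POSE_SPACING:
            self.poses.append(odom_pose)
            if len(self.poses) > 1:  # odometry edge to the previous pose
                self.edges.append(("odom", len(self.poses) - 2,
                                   len(self.poses) - 1, odom_pose))
            return True
        return False

    def observe_marker(self, marker_id, camera_measurement):
        """Register the landmark once, then link the current pose to it."""
        self.landmarks.setdefault(marker_id, camera_measurement)
        self.edges.append(("marker", len(self.poses) - 1,
                           marker_id, camera_measurement))
```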
Two novel parallel mechanisms (PMs) employing two or three PaPaRR subchains are suggested. Each of the two PMs has translational 3-DOF motion and employs only revolute joints, making them well suited to haptic devices requiring minimal friction. The position analyses of the two PMs are conducted, and the mobility analysis, kinematic modeling, and singularity analysis of each PM are performed using screw theory. Through optimal kinematic design, each PM attains excellent kinematic characteristics as well as a workspace size adequate for haptic applications. In particular, by adding a redundantly actuated joint to the 2-PaPaRR type PM, which has a closed-form position solution, it is shown that all of its parallel singularities within the reachable workspace are completely removed and that its kinematic characteristics are improved.
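For reference, the mobility analysis mentioned above usually starts from the standard spatial Grübler-Kutzbach criterion, given below; the paper's screw-theory treatment refines this, since the formula alone can miscount overconstrained parallel mechanisms.

```latex
% Standard spatial Gr\"ubler--Kutzbach mobility criterion, shown only as
% the usual starting point; the paper's screw-theory analysis is more
% refined and handles overconstrained cases this formula can miscount.
\begin{equation}
  M = 6\,(n - g - 1) + \sum_{i=1}^{g} f_i
\end{equation}
% n   : number of links (including the fixed base)
% g   : number of joints
% f_i : degrees of freedom of joint i (f_i = 1 for each revolute joint)
```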