Background: The Microsoft Kinect, a low-cost gaming device, has been studied as a promising clinical gait analysis tool with satisfactory reliability and validity. However, its accuracy is guaranteed only when it is properly positioned in front of the subject.
Objective: The purpose of this study was to quantify the error when the Kinect was positioned at a 45° angle to the longitudinal walking plane, compared with the error when the Kinect was positioned directly in front of the subject.
Methods: Sixteen healthy adults performed two testing sessions, walking toward the Kinect and walking at a 45° angle to it. Spatiotemporal outcome measures of stride length, stride time, step length, step time, and walking speed were examined. To assess the error between the Kinect and a 3D motion analysis system, mean absolute errors (MAE) were calculated and compared.
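The MAE comparison in the Methods can be sketched as below; the stride-length values are illustrative placeholders, not the study's data:

```python
import numpy as np

def mean_absolute_error(kinect_vals, reference_vals):
    """MAE between paired measurements from the Kinect and the reference
    3D motion analysis system (e.g. stride lengths in metres)."""
    k = np.asarray(kinect_vals, dtype=float)
    r = np.asarray(reference_vals, dtype=float)
    return float(np.mean(np.abs(k - r)))

# Hypothetical stride lengths (m) for one subject; not the study's data.
kinect_stride = [1.21, 1.18, 1.25, 1.20]
mocap_stride = [1.23, 1.20, 1.24, 1.22]
mae = mean_absolute_error(kinect_stride, mocap_stride)  # ≈ 0.0175 m
```

The same function applies unchanged to stride time, step length, step time, and walking speed, one outcome measure at a time.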
Conclusion: Based on our findings, positioning the Kinect directly in front of the person walking toward it provides the optimal spatiotemporal data. We therefore conclude that the Kinect should be positioned carefully in clinical settings.
In this study, a point cloud technique using Microsoft's Kinect® v2 was introduced to acquire indoor spatial information of a building. Two-dimensional pixel coordinates of the projected image acquired by the camera were converted into 3D images through calibration of each camera, and spatial information was constructed on this basis. The 3D images, obtained by rotating 360° around a reference point, overcome the limitation of conventional 2D images, in which distance cannot be measured, and the resulting point cloud was used to form a 3D map. The 3D map achieves measurement efficiency comparable to existing sensors for spatial-information convergence, while post-processing of lens distortion allows the spatial information to be measured accurately. The measurements were validated by comparison with 2D drawings and with the lengths and positions of actual spaces and structural members.
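The pixel-to-3D conversion described above follows the standard pinhole back-projection. A minimal sketch, in which the intrinsic parameters are illustrative stand-ins for values obtained from calibration:

```python
import numpy as np

# Illustrative depth-camera intrinsics (focal lengths and principal point);
# real values would come from the calibration step described in the paper.
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def depth_to_point_cloud(depth):
    """Back-project a depth image (metres) into an N x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels
```

Clouds captured at successive rotation angles around the reference point would then be transformed into a common frame to build the 3D map.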
Three-dimensional (3D) display techniques are widely used in daily life. In particular, to produce augmented game content that can interact with users, high-resolution image data are needed to reconstruct 3D models more precisely. In this paper, we expand the depth image captured by the Kinect using various interpolation methods (nearest neighbor, bilinear, bicubic) to match the size of the original Kinect color image. To measure the quality of the expanded depth image against the original depth image, we used the PSNR (peak signal-to-noise ratio) index. In addition, we implemented a GPU parallel processing algorithm with OpenCL to interpolate large amounts of image data rapidly. The experiments showed that bicubic interpolation produced the most accurate depth image, although it took the longest time.
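A minimal sketch of the two building blocks named above, nearest-neighbor upscaling and the PSNR index (bilinear and bicubic variants are typically delegated to a library such as OpenCV's `cv2.resize`); the image sizes and peak value are illustrative:

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour upscaling of a 2-D depth image."""
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h   # source row for each output row
    cols = np.arange(new_w) * w // new_w   # source column for each output column
    return img[rows[:, None], cols]

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-size images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

In the paper's pipeline, the inner loops of such interpolation kernels are what the OpenCL implementation parallelizes across GPU work-items.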
The International Hydrographic Organization (IHO) has adopted the S-100 specification as an international GIS standard for general use in the maritime domain. Accordingly, technology for next-generation navigation-information support systems based on GIS standards is being developed, and an augmented-reality navigation information system that overlays navigation information on CCTV video to support navigation is currently under development. In this study, we considered applying the system to a transparent display as a way to support it effectively. When a transparent display is used, image distortion arises from the wide-angle lens needed to secure the field of view, and the mismatch between the CCTV image and the image projected on the transparent display at the user's position makes the scene look incongruous to the user. To resolve this, we studied view registration and image correction methods, and demonstrated the feasibility of the technology by developing a prototype.
Global interest in smart-wear has risen rapidly in the 21st century. "Smart-wear" is one application of intelligent textiles and refers to all clothes made with (or converging with) intelligent textiles. New developments represent a positive opportunity for the fashion industry to integrate new technologies and evolve. Smart-wear also includes wearable computers or digital clothing, defined as "garment-integrated devices which augment the functionality of clothing, or which impart information-processing functionality to a garment". The garment is an ideal interface medium between humans and electronic products, given the interaction between technology and the fashion industry. Smart-wear represents the future of both the textile/clothing and electronics industries.
Smart-wear in the form of transformable garments allows aesthetics and functionality to be converted into multiple looks and functions that satisfy various user needs and wants. Smart-wear offers a potential paradigm shift.
Previous studies have focused on the role of transformation in understanding the relationship and interaction between humans and new digital technologies (Petersen, Iversen, Krogh, & Ludvigsen, 2004).
Hussein Chalayan created a transforming dress that can twitch and reconfigure itself: the hemline of a long Victorian dress contracts into a flapper-style dress. Berzowska created dresses that use shape-memory alloys to move and change in continuous motions (Ariyatum & Holland, 2003). Perovich used a pneumatic approach to lift garments and change the appearance of clothes (von Radziewsky, Krüger, & Löchtefeld, 2015). Lee and Kim (2014) built a shape-changing dress that applies fabric properties and an illuminance sensor to fold pleats.
The idea of changing the overall appearance of clothes is promising. Contemporary smart-wear has various functions, including sensing, actuating, powering, generating, storing, communicating, data processing, and connecting. Technologies for digital applications can be easily controlled in smart-wear using an Arduino (Na & Cho, 2009).
An Arduino-based embedded system can be worn like clothing or an accessory, which makes it well suited to a shop window display.
Shop window displays of fashion products embody cultural consumption and fashioned identities; they have developed into an art form in their own right and produce striking imagery within fashion culture.
In recent decades store window displays have become a unique form of advertising and are the first point of contact between the shop and the shopper (Crewe, 2015).
The shop window display design might not instantly attract attention until the shopper realizes its interactive aspects. Such an interaction visually reveals a relationship between the store window and shopper's reaction.
In order to connect these shop window displays with an interactive fashion design, this paper aims to illustrate how these concepts fit into the prototype.
This paper develops a prototype of a Wearable Shape-Changing (WSC) garment that deforms fabric to make pleats on clothing for a store window dummy. Data processing is driven by the motion of a shopper: the input functionality discriminates between different shopper motions using the Microsoft Kinect sensor. A concealed Kinect system scans the shopper's joints for skeleton extraction while the shopper is outside the shop window. It can detect the shopper's simple motions and simultaneously deliver the information to the Arduino in the system. A prospective fashion display system needs to be devised on a more rigorous technical basis that utilizes information on the physical properties of fabrics to facilitate development in the store window. There has been some discussion of how fabrics could create foldable clothing items; in addition, a range of folding techniques has been extended to e-textiles due to their useful characteristics (Perovich, Mothersill, & Farah, 2014).
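The Kinect-to-Arduino chain described above can be sketched as a simple gesture check on the extracted skeleton. The joint names, the "arm lift" rule (wrist above shoulder), and the command bytes are all illustrative assumptions, not the paper's implementation:

```python
def arm_lifted(joints):
    """joints: dict of joint name -> (x, y, z) in camera space (y up).
    Hypothetical rule: the arm counts as lifted when the right wrist
    rises above the right shoulder."""
    return joints['wrist_right'][1] > joints['shoulder_right'][1]

def command_for(joints):
    """Byte to send over serial to the Arduino driving the pleat actuator
    (b'U' = raise hemline, b'D' = lower; codes are made up for this sketch)."""
    return b'U' if arm_lifted(joints) else b'D'
```

In a live system, `joints` would be filled from the Kinect skeleton stream each frame and the returned byte written to the Arduino's serial port.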
The experiments performed in this paper allow observers to examine basic fabric characteristics and physical properties: how behavior changes during fold deformation and the recovery process, and the correlations between stiffness and recovery rate.
As experimental samples, this paper selects two types of relatively stiff organza fabric (one 100% silk, the other 100% polyester). A diamond pattern was selected for the pleats, and the pleat finishing process employed a heat-setting method commonly used in the fashion industry.
The results were as follows. The silk organza weighs 66 g/㎡ with a nominal thickness of 0.17 mm, and the polyester organza weighs 39.6 g/㎡ with a nominal thickness of 0.11 mm. Both the silk and polyester samples have larger stiffness values in the weft direction. Tensile properties showed similar values in the warp and weft directions. Polyester has excellent thermoplasticity and unique resilience, providing good pleat retention and crease recovery, while silk has low wrinkle recovery. However, silk has identical recovery rates in the first and second elongation deformations for diamond-pattern pleats. The diamond pattern also has a significant correlation with the warp and bias directions; thus, the folding composition should take the fabric directions into account according to the folding technique. Based on the experimental results on the samples' physical properties, silk was chosen for the prototype.
In the prototype, the shop window dummy wears a long dress that is designed to become shorter when a shopper lifts an arm: the operating mechanism pulls the front hemline up to the lower thigh when the Kinect sensor detects the motion.
As a means of visual communication or expression of the shopper's mood, illumination may be attached at the shopper's discretion. The advantage of the WSC dress over a traditional static dress is that the shape transformation occurs immediately through interaction.
In future studies, different approaches could be applied to clothing using both hand and finger movements in a mobile environment. This paper focuses on a set of alliances between technology and fashion/textiles, with the WSC designed as an interface serving both purposes. This study represents a bridge between fashionable technologies and informative material properties, and a small first step from static fashion to dynamic, interactive fashion.
This paper proposes a Game Motion Library Tool that utilizes Kinect 2.0 to simplify the pipeline of character motion production fitting a game concept and to actively reuse saved motion data. To develop the tool, a motion library was constructed that organizes the motion data as BVH files, produced with MAXScript so that FBX files and motion data collected through Kinect 2.0 can be used in 3ds Max; the BVH files are then imported into 3ds Max, automatically converted into BIP files, and applied to characters.
Recently, educational game content using advanced mobile and Web technology has been widely researched. However, considering the interaction involved for children, user-friendly interfaces and engaging interactions are necessary. In addition, it is important to design for the user's learning motivation to maximize the effect of educational game content. Therefore, in this study, we designed and implemented an educational game content system using interworking between a Kinect camera, a PC, and an Android-platform humanoid robot. The designed system consists of a Kinect subsystem for capturing hand gestures, a game content subsystem for playing games, and an Android humanoid robot subsystem for interacting with users. The system can raise users' interest and willingness through handy interactions such as hand gestures and entertaining quiz game content. To improve effectiveness and usability, as further study we plan to advance the system so that it can run on a smart TV with a built-in camera.
We present a Microsoft Kinect-based hand recognition algorithm for an interactive image clipping system, which is widely used in environments such as public facilities and security environments where personal capture devices, including mobile phones, are not allowed. An image clipping system requires a user-friendly interface and accurate image capture. We build the system by combining a Microsoft Kinect, an HD webcam, and a projector. The Kinect and webcam capture the motion of the user's hand, and the projector displays the user-selected area of the captured material. Hand recognition is composed of three steps: (i) the region occupied by the user's hand is extracted from an image, (ii) the fingertips of the extracted hand region are detected using the k-curvature algorithm, and (iii) the height of the fingertip is estimated using the depth image from the Kinect. The height of the fingertip indicates whether the user's finger has touched the surface of the target. The region traced by the fingertip is clipped from the image and stored as the target image. The effectiveness of our hand recognition algorithm is demonstrated through a user test.
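Step (ii) above can be sketched as follows. This is a minimal k-curvature pass over an ordered hand contour; the threshold value is an assumption, and the convexity check against the palm (which real pipelines add to reject the valleys between fingers) is omitted:

```python
import numpy as np

def k_curvature_fingertips(contour, k=5, angle_thresh_deg=60.0):
    """Return indices of contour points with a sharp k-curvature angle.

    contour: (N, 2) array of hand-contour points in order. For each point
    p[i], the angle between (p[i-k] - p[i]) and (p[i+k] - p[i]) is computed;
    small angles indicate sharp convex points such as fingertips.
    """
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    tips = []
    for i in range(n):
        a = pts[(i - k) % n] - pts[i]
        b = pts[(i + k) % n] - pts[i]
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if ang < angle_thresh_deg:
            tips.append(i)
    return tips
```

Step (iii) would then read the Kinect depth value at each detected tip to decide whether the finger is touching the surface.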
Creating avatar animations is a tedious and time-consuming task, since the desired avatar poses must be specified for each of a large number of keyframes. This paper proposes a fast and handy method for creating game character animation content using motion data captured from the Kinect sensor. A Kinect sensor captures and saves human motion, providing the motion information as a simple set of joint-position coordinates. From the captured motion data we determine the set of bone transforms that makes up the human skeletal animation data; the animation data are then used to determine the positions of all bones at the current time in the animation. For experimental purposes we created a simple avatar character, expressed in the MD5 format, in which mesh data and animation data are separated. Each set of twenty joint positions reflects a snapshot of the character's pose; the sets are used to evaluate the bone transform matrices and construct our skeletal animation scheme. We verified our method by applying the captured Kinect motion data to character animation. Our approach provides an easy way to create avatar animations.
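The joint-positions-to-bone-transform step can be sketched as below: a 4x4 matrix that rotates a bone's rest direction onto the captured joint direction and translates it to the parent joint. This is a simplified illustration (real skeletal pipelines also handle bone roll and the joint hierarchy); the y-axis rest direction is an assumption:

```python
import numpy as np

def bone_transform(parent_pos, child_pos, rest_dir=np.array([0.0, 1.0, 0.0])):
    """4x4 transform aligning rest_dir with the captured bone direction
    (child - parent) and placing the bone at the parent joint."""
    d = np.asarray(child_pos, float) - np.asarray(parent_pos, float)
    d = d / np.linalg.norm(d)
    v = np.cross(rest_dir, d)                 # rotation axis (scaled by sin)
    c = float(np.dot(rest_dir, d))            # cosine of the rotation angle
    if np.isclose(c, -1.0):
        # Opposite direction: 180° rotation about x (valid for the y rest axis).
        R = np.diag([1.0, -1.0, -1.0])
    else:
        vx = np.array([[0, -v[2], v[1]],
                       [v[2], 0, -v[0]],
                       [-v[1], v[0], 0]])
        R = np.eye(3) + vx + vx @ vx / (1.0 + c)   # Rodrigues' formula
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = parent_pos
    return T
```

Applied to each of the twenty Kinect joints per frame, such transforms form the skeletal animation data that drives the MD5 mesh.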
The chroma key technique, which composites images by separating an object from a background of a specific color, has restrictions on color and space. In particular, unlike general chroma keying, image composition for stereo 3D display requires a natural composition method in 3D space. This thesis attempted to composite images in 3D space using a depth-keying method based on the depth information of the Microsoft Kinect. After being separated from its background according to depth information, the object was converted into a point cloud in 3D space. The composite of the 3D virtual background and the object was then rendered and played back as stereo 3D images using virtual cameras. The research demonstrated naturally composited stereo 3D images as well as the feasibility of stereoscopic 3D mixed-reality imaging technology.
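The depth-keying step can be sketched as a per-pixel depth-range test, the depth analogue of a chroma key's color test; the near/far band in the usage below is illustrative:

```python
import numpy as np

def depth_key(color, depth, near, far):
    """Keep color pixels whose depth (metres) lies in [near, far];
    zero out the rest. Returns the keyed image and the foreground mask."""
    mask = (depth >= near) & (depth <= far)
    out = np.zeros_like(color)
    out[mask] = color[mask]
    return out, mask
```

The surviving foreground pixels, paired with their depth values, are the points that get converted into the 3D point cloud for composition with the virtual background.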
We generated a high-resolution 3D object using a commercial DSLR camera and the low-resolution depth information of the Kinect. A high-resolution depth map was obtained through camera calibration between the DSLR and the Kinect sensor. A 3D mesh model was created from the high-resolution depth information and mapped with RGB color values. The experimental results showed that it is possible to create a high-resolution 3D object from the depth camera. This work is expected to be applied to realistic game graphics.
This study concerns the development and usability testing of 'Paldogangsan 4', a serious game for improving the physical and mental health of the elderly. As the successor to 'Paldogangsan 3' developed in 2013, it had to use the 'Kinect 2' because the original 'Kinect' was discontinued. By introducing individual IDs and accumulating per-user exercise data, players can see their performance immediately after each play through comparison with their previous averages. In addition, a real supermarket background was added alongside the traditional marketplace background, and a tutorial was added so that individuals could familiarize themselves with the game. The game was exhibited at PlayX4 2016, and analysis of a usability survey of 40 participants showed, as with its predecessor, its potential benefit for the elderly.
This paper presents a serious running game that uses next-generation game interfaces. The game was developed as a two-player competition and works as follows. First, an EEG headset measures user 1's emotional changes, concentration intensity, and facial movements. Second, the Kinect motion-capture device measures user 2's motions, such as running, changing direction left and right, and jumping. The core of the game is that the user's EEG graph is rendered in real time as the shape of the in-game map. By watching their EEG visualized as the game map, users can learn to regulate their own brainwaves. Thus, user 1 can improve concentration through brain activation while playing this serious game, and user 2 can improve physical fitness by actually running and jumping.
The Kinect, developed as a motion-recognition game device, has been used not only in games but also in various fields such as science, education, and medicine since its SDK was released in 2011. The Kinect has several problems: its response to user movement is slow, its recognition data are noisy, and tracking fails when part of the body is occluded. Because of these peculiarities of the play environment, most games impose inconveniences on the player, such as constraints on Kinect placement or posture correction. This study proposes an interface design method for Kinect-based games that minimizes demands on the player while coping with exceptional situations such as noise and enabling consistent motion processing, and presents, through experiments, the latency that should be considered when developing games with the Kinect.
This paper presents a method of improving the pose recognition accuracy of objects using the Kinect sensor. First, using the SURF algorithm, one of the most widely used local feature point algorithms, we modify the inner parameters of the algorithm for efficient object recognition: the proposed method adjusts the distance between the box filters, modifies the Hessian matrix, and eliminates improper keypoints. Second, the object orientation is estimated based on the homography. Finally, a novel auto-scaling method is proposed to improve the accuracy of object pose estimation. The proposed algorithm is experimentally tested with objects in the plane and its effectiveness is validated.
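The homography step in the pipeline above can be sketched with the standard direct linear transform (DLT). In the paper's method, matched SURF keypoints would supply the correspondences; here plain coordinate pairs stand in for them:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src from >= 4 point
    correspondences via the direct linear transform (SVD null space)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)     # right singular vector of smallest value
    return H / H[2, 2]           # normalize scale
```

For a planar object, the rotation part of the object's pose can then be recovered by decomposing H with the camera intrinsics.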
Innovative user interfaces that began with the smartphone have evolved into natural user interfaces (NUI) that recognize people's natural movements, and are expected to become the basic user experience of all home appliances. This study explored, through case studies, methods of teaching NUI programming using Greenfoot, an educational Java development tool, and the Microsoft Kinect sensor. After 30 class sessions, the participating students took part in 1:1 in-depth interviews on their understanding and analysis of NUI, their awareness of its limitations, and their self-efficacy in creating NUI programs. The results showed that middle and high school students could create high-quality NUI programs without much difficulty, explored object-oriented programming concepts and the potential of innovative user interfaces in the process, and perceived NUI programming positively.
'Paldogangsan 3', developed by our team, is a serious game for the elderly and the successor to 'Paldogangsan 2' (a memory-improvement walking game using handle and footboard interfaces) developed in 2012; it removes the armrest and footboard interfaces and instead enables interaction through motion recognition using the Kinect. The expected effects include improved memory, improved concentration from having to find items in the field of view, and increased vitality through walking exercise, with the goal of improving mental health through gameplay as a leisure activity. A two-week short-term experiment was conducted with 25 people aged 65 and over at the Cheonan Senior Welfare Center, and significant improvements were obtained in memory, physical exercise, and, within the mental-health domain, the level of self-esteem.