The Korea Correctional Service of the Ministry of Justice (법무부 교정본부) has set organizational goals so that all staff can adapt to a complex administrative environment and develop the organization, and it has continually engaged in self-examination and effort to respond to changing times. On the other hand, it has been criticized for failing to adapt quickly to environmental change. Based on Marquardt's learning organization theory, this study analyzes the relationships among the subsystems that constitute a learning organization (learning dynamics, organization transformation, people empowerment, knowledge management, and technology application) to measure the degree to which correctional officers form a learning organization, identifies the factors on which that degree is low, and proposes improvements, thereby arguing that a learning organization should be built in the correctional service as well. Data were collected with a self-administered questionnaire. The survey targeted staff serving at Uijeongbu Prison (의정부교도소) and was conducted over roughly two weeks, from September 1 to September 14, 2014. A census of the entire correctional organization would be needed to capture correctional officers' perceptions precisely, but given time, cost, and geographic constraints, a sample survey centered on Uijeongbu Prison was conducted instead. Respondents were recruited by convenience sampling: questionnaires were distributed after the purpose, content, and intent of the research had been explained. A total of 243 questionnaires were returned; 3 incomplete or otherwise insincere responses were excluded, leaving 240 for the final analysis. The results show that, among the sub-factors of learning, people empowerment was not statistically significant, whereas organization transformation, knowledge management, and technology application had statistically significant effects. Organization transformation had the largest effect, followed by knowledge management and then technology application.
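A minimal sketch of the kind of regression analysis such survey results suggest, assuming hypothetical file and column names and that each Marquardt subsystem has already been averaged into a single score per respondent (the abstract does not specify the model, so treating learning dynamics as the outcome is one plausible reading):

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical file and column names; 240 usable responses.
    df = pd.read_csv("survey_responses.csv")
    predictors = ["org_transformation", "people_empowerment",
                  "knowledge_management", "technology_application"]
    X = sm.add_constant(df[predictors])      # add intercept term
    y = df["learning_dynamics"]              # assumed outcome score

    model = sm.OLS(y, X).fit()
    print(model.summary())  # coefficient and p-value for each subsystem

A non-significant p-value for people_empowerment alongside significant coefficients for the other three predictors would reproduce the pattern reported above.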
Underwater robotic vehicles (URVs) are used for various work assignments such as pipeline laying, inspection, data collection, drill support, hydrographic mapping, construction, and the maintenance and repair of undersea equipment. As the use of such vehicles increases, the development of vehicles with greater autonomy becomes highly desirable. The vehicle control system is one of the most critical vehicle subsystems for increasing a vehicle's autonomy. The vehicle dynamics are nonlinear and time-varying, and hydrodynamic coefficients are often difficult to estimate accurately. Experiments have also shown that the effect of electrically powered thruster dynamics on the vehicle becomes significant at low speed or during station keeping. A conventional fixed-gain linear controller based on simplified vehicle dynamics, such as a PID controller, may not be able to handle these properties and may perform poorly. It is therefore desirable to have a control system capable of learning and adapting to changes in the vehicle dynamics and operating parameters while delivering the desired performance. This paper presents an adaptive and learning control system that estimates a new set of parameters, defined as combinations of the unknown bounded constants of the system parameter matrices, rather than the system parameters themselves. The control system is described together with a proof of stability, and the effect of unmodeled thruster dynamics on a single-thruster vehicle system is also investigated.
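As a hedged illustration of the general approach (the notation below is a standard Lyapunov-based adaptive law, not the paper's actual formulation): if the dynamics can be factored linearly in a lumped parameter vector $\theta$ of unknown bounded constants, $M\dot{s} + Cs = \tau - \Phi(q, \dot{q}, t)\,\theta$, with tracking-error measure $s$ and known regressor $\Phi$, then the control and adaptation laws

$$\tau = \Phi(q, \dot{q}, t)\,\hat{\theta} - K_D s, \qquad \dot{\hat{\theta}} = -\Gamma\,\Phi^{\top}(q, \dot{q}, t)\,s,$$

with $K_D, \Gamma$ positive definite, make the Lyapunov candidate $V = \tfrac{1}{2} s^{\top} M s + \tfrac{1}{2}\tilde{\theta}^{\top}\Gamma^{-1}\tilde{\theta}$ (where $\tilde{\theta} = \hat{\theta} - \theta$) satisfy $\dot{V} \leq -s^{\top} K_D s$, using the usual skew symmetry of $\dot{M} - 2C$. The tracking error thus stays bounded without requiring the individual hydrodynamic coefficients themselves.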
As the development of autonomous vehicles becomes realistic, many automobile manufacturers and component suppliers aim to develop 'completely autonomous driving'. ADAS (Advanced Driver Assistance Systems), recently applied in automobiles, supports the driver in controlling lane keeping, speed, and direction within a single lane under limited road conditions. Although obstacle avoidance technologies have been developed, they concentrate on simple avoidance maneuvers and do not consider control of the actual vehicle in real situations, so sudden changes in steering and speed make drivers feel unsafe. To develop a 'completely autonomous' vehicle that perceives its surroundings and operates by itself, the vehicle's capability should be enhanced in the way a human driver drives. In this sense, this paper establishes a strategy by which autonomous vehicles behave in a human-friendly manner, grounded in vehicle dynamics, through reinforcement learning based on Q-learning, a type of machine learning. The obstacle avoidance reinforcement learning proceeded through five simulations. The reward rule was set so that the car could learn by itself from recurring events, giving the experiment an environment similar to that of human driving. A driving simulator was used to verify the results of the reinforcement learning. The ultimate goal of this study is to enable autonomous vehicles to avoid obstacles in a human-friendly way when obstacles appear in their path, using control methods learned beforehand under various conditions through reinforcement learning.
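A minimal sketch of the tabular Q-learning update such an approach rests on; the state encoding, action set, and the reward shaping that penalizes abrupt steering or speed changes are assumptions for illustration, not the paper's actual design:

    import random

    def q_learning_step(Q, state, actions, reward_fn, step_fn,
                        alpha=0.1, gamma=0.95, epsilon=0.1):
        """One update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        # Epsilon-greedy choice between exploring and exploiting.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q.get((state, a), 0.0))
        next_state = step_fn(state, action)  # simulated vehicle step
        # Hypothetical reward: progress toward the goal minus penalties for
        # collisions and for abrupt steering/speed changes, so the learned
        # avoidance behaviour feels human-friendly.
        r = reward_fn(state, action, next_state)
        best_next = max(Q.get((next_state, a), 0.0) for a in actions)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (r + gamma * best_next - old)
        return next_state

Repeating this step over many simulated episodes fills in the table Q, from which the avoidance policy is read off as the highest-valued action in each state.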