This research was conducted to design the upright parts of a hand-made vehicle with the purpose of reducing material and machining cost while ensuring structural safety. The aluminum knuckles were modelled as three-piece assemblies to enhance design flexibility and to reduce CNC machining cost. A vehicle model was constructed in a CAD program and simulated in ADAMS View to estimate the joint forces developing during a 20-degree step-steer maneuver at 60 km/h. The joint forces obtained from the vehicle dynamics simulation were used for structural analysis in ANSYS, and the dimensions of the knuckle parts were adjusted until the lowest safety factor reached 2.0. The weight of the knuckle decreased by 50% compared with the previous version, which had been designed without structural analysis. The overall manufacturing cost decreased by 33% owing to the reduction in material as well as in CNC machining effort.
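A minimal sketch of the sizing loop described above: enlarge a critical dimension and re-check the minimum safety factor until it reaches 2.0. The analytical bending-stress formula below merely stands in for the ANSYS solve, and the yield strength, load, lever arm, and section shape are illustrative assumptions, not the paper's actual geometry or results.

```python
# Iterative sizing sketch: grow a wall thickness until the minimum safety
# factor reaches 2.0. The closed-form bending stress replaces the FE solve;
# all numeric values are illustrative assumptions.

YIELD_STRENGTH = 275e6     # Pa, assumed aluminum alloy yield strength
TARGET_SF = 2.0
JOINT_FORCE = 4000.0       # N, assumed peak joint force from the step-steer run
MOMENT_ARM = 0.08          # m, assumed lever arm of the load about the section

def min_safety_factor(thickness, width=0.03):
    """Stand-in for the FE result: safety factor of a rectangular section
    in bending under the simulated joint load."""
    section_modulus = width * thickness**2 / 6.0          # m^3
    bending_stress = JOINT_FORCE * MOMENT_ARM / section_modulus
    return YIELD_STRENGTH / bending_stress

thickness = 0.004   # m, starting wall thickness
while min_safety_factor(thickness) < TARGET_SF:
    thickness += 0.0005   # enlarge the critical dimension and re-analyze

print(f"chosen thickness: {thickness*1e3:.1f} mm, "
      f"SF = {min_safety_factor(thickness):.2f}")
```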
An automotive company has developed corporate requirements for vehicle dynamic-response characteristics that must be met before a product is delivered to the customer. To provide early predictions of vehicle handling performance, prior to the construction and testing of prototypes, the dynamic behavior due to road inputs must be predicted. This paper describes an application of the “virtual proving ground” approach to the handling characteristics of a vehicle on proving-ground road surfaces. A program that generates virtual road profiles for the vehicle dynamics simulation was also developed.
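One common way to synthesize a virtual road profile is to superpose sinusoids whose amplitudes follow an ISO 8608-style roughness spectrum. The paper does not state which method its road-generation program uses, so the sketch below, including the PSD exponent, roughness coefficient, and frequency band, is only an illustrative assumption.

```python
# Illustrative road-profile generator: sum of sinusoids with amplitudes drawn
# from an ISO 8608-style displacement PSD Gd(n) = Gd(n0) * (n/n0)^-2.
# The method and all constants are assumptions, not the paper's program.
import numpy as np

def road_profile(length=500.0, dx=0.1, Gd_n0=64e-6, n0=0.1,
                 n_min=0.01, n_max=10.0, bands=200, seed=0):
    """Return (x, z): distance along the road [m] and elevation [m]."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length, dx)
    n = np.linspace(n_min, n_max, bands)          # spatial frequencies [cycles/m]
    dn = n[1] - n[0]
    Gd = Gd_n0 * (n / n0) ** (-2.0)               # displacement PSD [m^3]
    amp = np.sqrt(2.0 * Gd * dn)                  # sinusoid amplitudes
    phase = rng.uniform(0.0, 2.0 * np.pi, bands)  # random phases
    z = np.sum(amp[:, None] *
               np.sin(2 * np.pi * n[:, None] * x[None, :] + phase[:, None]),
               axis=0)
    return x, z

x, z = road_profile()
print(f"RMS elevation: {z.std()*1000:.1f} mm over {x[-1]:.0f} m")
```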
The automotive development process inherently involves repeated transfer and exchange of information among design groups across multiple development stages. If work is carried out without an accurate understanding of this complex process of mutual information exchange, unnecessary rework can result, leading to wasted cost and schedule delays. In this study, a system dynamics model is developed to examine the effects and benefits of frequent design-build-test cycles on the automotive development process.
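A minimal sketch of the classic system-dynamics "rework cycle" of the kind commonly used for such studies is given below: completed work that later fails test flows back into the backlog after a discovery delay. The stock/flow structure and every numeric value are illustrative assumptions, not the model developed in the study.

```python
# Rework-cycle sketch: work-to-do, undiscovered rework, and completed work,
# with a design-build-test loop feeding failed work back into the backlog.
# All parameters are illustrative assumptions.

DT = 1.0                 # weeks per step
productivity = 40.0      # tasks completed per week
rework_fraction = 0.3    # share of completed work that later fails test
test_delay = 4.0         # weeks before defective work is discovered

work_to_do = 1000.0      # tasks not yet (re)worked
undiscovered_rework = 0.0
work_done = 0.0
t = 0.0

while work_to_do + undiscovered_rework > 1.0:
    completion = min(productivity, work_to_do / DT)
    discovery = undiscovered_rework / test_delay        # design-build-test loop
    work_to_do += (discovery - completion) * DT
    undiscovered_rework += (completion * rework_fraction - discovery) * DT
    work_done += completion * (1.0 - rework_fraction) * DT
    t += DT

print(f"project finishes after ~{t:.0f} weeks "
      f"(ideal with no rework: {1000.0/productivity:.0f} weeks)")
```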
Underwater robotic vehicles (URVs) are used for various work assignments such as pipe-lining, inspection, data collection, drill support, hydrographic mapping, and the construction, maintenance, and repair of undersea equipment. As the use of such vehicles increases, the development of vehicles with greater autonomy becomes highly desirable. The vehicle control system is one of the most critical subsystems for increasing the autonomy of the vehicle. The vehicle dynamics are nonlinear and time-varying, and hydrodynamic coefficients are often difficult to estimate accurately. Experiments have also shown that the effect of electrically powered thruster dynamics on the vehicle becomes significant at low speed or during station-keeping. A conventional linear controller with fixed gains based on simplified vehicle dynamics, such as a PID controller, may not be able to handle these properties and may result in poor performance. Therefore, it is desirable to have a control system capable of learning and adapting to changes in the vehicle dynamics and operating parameters while providing the desired performance. This paper presents an adaptive and learning control system that estimates a new set of parameters, defined as combinations of unknown bounded constants of the system parameter matrices, rather than the system parameters themselves. The control system is described with a proof of stability, and the effect of unmodeled thruster dynamics on a single-thruster vehicle system is also investigated.
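To make the idea of adapting to unknown vehicle parameters concrete, the sketch below shows a generic regressor-based adaptive tracking controller reduced to a 1-DOF surge model m·v̇ + d·v|v| = τ with unknown m and d. The parameterization, gains, and plant numbers are illustrative assumptions; the paper's actual combined-parameter formulation and stability proof are not reproduced here.

```python
# Generic adaptive tracking controller on a 1-DOF surge model with unknown
# mass m and quadratic drag d: tau = Y @ theta_hat - k*s, theta_hat adapted
# with a gradient law. All values are illustrative assumptions.
import numpy as np

m_true, d_true = 120.0, 35.0            # unknown plant parameters
theta_hat = np.array([50.0, 5.0])       # initial estimates of [m, d]
k = 80.0                                # feedback gain on the tracking error
gamma = np.diag([20.0, 5.0])            # adaptation gains

dt, v = 0.01, 0.0
for i in range(5000):
    t = i * dt
    v_d = 1.0 * np.sin(0.5 * t)              # desired surge velocity
    vdot_d = 0.5 * np.cos(0.5 * t)
    s = v - v_d                               # tracking error
    Y = np.array([vdot_d, v * abs(v)])        # regressor: Y @ [m, d]
    tau = Y @ theta_hat - k * s               # certainty-equivalence control
    theta_hat = theta_hat - gamma @ Y * s * dt    # adaptation law
    vdot = (tau - d_true * v * abs(v)) / m_true   # true vehicle dynamics
    v += vdot * dt

print(f"final tracking error: {s:.4f}, estimates: {theta_hat.round(1)}")
```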
As the development of autonomous vehicles becomes realistic, many automobile manufacturers and component producers aim to achieve ‘completely autonomous driving’. ADAS (Advanced Driver Assistance Systems), which have recently been applied in automobiles, support the driver in controlling lane keeping, speed, and direction within a single lane under limited road conditions. Although obstacle-avoidance technologies have been developed, they concentrate on simple avoidance maneuvers and do not consider control of the actual vehicle in real situations, where sudden changes in steering and speed make drivers feel unsafe. To develop a ‘completely autonomous driving’ vehicle that perceives its surrounding environment and operates by itself, the vehicle's capabilities should be enhanced to resemble those of a human driver. In this sense, this paper establishes a strategy with which autonomous vehicles behave in a human-friendly way, based on vehicle dynamics, through reinforcement learning built on Q-learning, a type of machine learning. The obstacle-avoidance reinforcement learning proceeded through five simulations. The reward rule was set so that the car could learn by itself from recurring events, giving the experiment an environment similar to that of human driving. A driving simulator was used to verify the results of the reinforcement learning. The ultimate goal of this study is to enable autonomous vehicles to avoid obstacles in a human-friendly way when obstacles appear in their path, using control methods learned beforehand under various conditions through reinforcement learning.
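Below is a minimal sketch of the tabular Q-learning update underlying the approach, applied to a toy lane-choice task in which the agent observes which lane an obstacle occupies and picks a target lane. The state/action encoding, reward values (including the mild penalty for abrupt lane changes meant to echo the human-friendliness goal), and hyperparameters are illustrative assumptions, not the study's simulation setup.

```python
# Tabular Q-learning sketch on a toy lane-choice task. State = (car lane,
# obstacle lane), action = target lane. Rewards and hyperparameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_lanes = 3
Q = np.zeros((n_lanes * n_lanes, n_lanes))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(car, obstacle, target):
    if target == obstacle:
        return -10.0                          # collision with the obstacle
    return 1.0 - 0.5 * abs(target - car)      # mild penalty for abrupt lane changes

for episode in range(5000):
    car = rng.integers(n_lanes)
    obstacle = rng.integers(n_lanes)
    state = car * n_lanes + obstacle
    if rng.random() < epsilon:                # epsilon-greedy exploration
        action = int(rng.integers(n_lanes))
    else:
        action = int(np.argmax(Q[state]))
    r = reward(car, obstacle, action)
    next_state = action * n_lanes + rng.integers(n_lanes)   # a new obstacle appears
    # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
    Q[state, action] += alpha * (r + gamma * np.max(Q[next_state]) - Q[state, action])

print(Q.round(2))   # learned values avoid the obstacle lane and prefer small lane changes
```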