Ionic liquids, which contain organic cations and organic/inorganic anions, are a class of low-temperature molten salts with potential for carbon dioxide separation. To address global warming and climate change, research on ionic-liquid-based membranes that capture carbon dioxide from flue gas has been actively pursued. This review discusses free-standing polymerized ionic liquid (PIL) membranes, ionic liquid blending, and ionic liquid composite membrane technologies. The introduction of new ionic liquid monomers, together with microstructural modification of PIL and composite membranes, has been used to improve the mechanical properties of the membranes and to greatly enhance gas permeability and selectivity. Diverse modifications of the cations and anions of ionic liquid monomers strongly influence the gas separation performance of the membranes.
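The permeability and selectivity figures of merit mentioned above are often compared through the ideal selectivity, the ratio of pure-gas permeabilities. A minimal sketch of that ratio (the permeability values below are hypothetical, not taken from the review):

```python
# Ideal CO2/N2 selectivity of a membrane from pure-gas permeabilities.
# Permeability is typically reported in Barrer
# (1 Barrer = 1e-10 cm^3(STP)*cm / (cm^2*s*cmHg)).

def ideal_selectivity(p_co2_barrer: float, p_n2_barrer: float) -> float:
    """Ratio of CO2 to N2 permeability (dimensionless)."""
    return p_co2_barrer / p_n2_barrer

# Hypothetical values for illustration only:
p_co2 = 300.0  # Barrer
p_n2 = 10.0    # Barrer
print(ideal_selectivity(p_co2, p_n2))  # → 30.0
```

A higher ideal selectivity at a given CO2 permeability indicates a better separation membrane, which is why both quantities are tracked together when monomers or microstructures are varied.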
Urchin-structured zinc oxide (ZnO) nanorod (NR) gas sensors were successfully demonstrated on a polyimide (PI) substrate, using single-wall carbon nanotubes (SWCNTs) as the electrode. The ZnO NRs were grown with ZnO shells arranged at regular intervals to form a network structure with maximized surface area. The high surface area and numerous junctions of the NR network structure were the key to excellent gas sensing performance. Moreover, the SWCNTs formed a junction barrier with the ZnO, which further improved the sensor characteristics. The fabricated urchin-structured ZnO NR gas sensors exhibited superior performance upon NO2 exposure, with a stable response of 110 and fast rise and decay times of 38 and 24 sec, respectively. Comparative analyses revealed that the high performance of the sensors was due to a combination of high surface area, numerous active junction points, and the use of the SWCNT electrode. Furthermore, the urchin-structured ZnO NR gas sensors showed sustained mechanical stability. Although degradation of the devices progressed during repeated flexibility tests, the sensors remained operational even after 10000 cycles of a bending test with a radius of curvature of 5 mm.
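The response and transient-time figures quoted above depend on how they are defined. A common convention for an n-type oxide exposed to an oxidizing gas such as NO2 is response = R_gas/R_air (resistance rises under the gas), with the rise time measured to 90% of the full resistance swing. A minimal sketch under those assumed definitions (the resistance trace is illustrative, not the paper's data):

```python
def response(r_gas: float, r_air: float) -> float:
    """Sensor response for an oxidizing gas on an n-type oxide:
    resistance increases under NO2, so response = R_gas / R_air."""
    return r_gas / r_air

def time_to_fraction(times, resistances, r0, r_final, frac):
    """First time at which the resistance change reaches `frac` of the
    full swing (r_final - r0). Brute-force scan; assumes a monotonic
    transient sampled at the given times."""
    target = r0 + frac * (r_final - r0)
    for t, r in zip(times, resistances):
        if (r_final > r0 and r >= target) or (r_final < r0 and r <= target):
            return t
    return None

# Illustrative transient: resistance (arbitrary units) sampled each second.
times = list(range(6))
resistances = [1.0, 40.0, 80.0, 100.0, 108.0, 110.0]
print(response(r_gas=110.0, r_air=1.0))                       # → 110.0
print(time_to_fraction(times, resistances, 1.0, 110.0, 0.9))  # → 3
```

The decay time would be computed the same way on the recovery transient, with the 10% threshold of the return toward the baseline resistance.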
This paper presents a robotic system that provides telepresence to the visually impaired by combining real-time haptic rendering with multi-modal interaction. A virtual-proxy-based haptic rendering process using an RGB-D sensor is developed and integrated into a unified framework for control of, and feedback from, the telepresence robot. We discuss the challenging problem of presenting environmental perception to a user with visual impairments and our solution based on multi-modal interaction. We also describe the experimental design and protocols, together with results from human subjects with and without visual impairments. A discussion of the system's performance and our future goals is presented toward the end.
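The virtual-proxy idea the abstract refers to can be summarized as follows: the proxy tracks the haptic device freely in open space, but when the device penetrates the sensed surface the proxy is held on that surface, and the feedback force is a spring between proxy and device. A minimal sketch, assuming the RGB-D surface is approximated by nearest-point projection over a point cloud (the function names, radius, and stiffness are illustrative, not from the paper):

```python
import math

def haptic_force(device_pos, cloud, radius=0.05, k=200.0):
    """Virtual-proxy sketch over an RGB-D point cloud.
    If the device is within `radius` of the nearest surface point, the
    proxy is placed on a sphere of that radius around the point and the
    returned force is k * (proxy - device); otherwise the proxy
    coincides with the device and the force is zero."""
    # Nearest cloud point (brute force for clarity; a k-d tree
    # would be used for real-time rates).
    nearest = min(cloud, key=lambda p: math.dist(p, device_pos))
    d = math.dist(nearest, device_pos)
    if d >= radius or d == 0.0:
        return (0.0, 0.0, 0.0)  # free space: proxy follows the device
    # Push the proxy out along the point-to-device direction.
    scale = radius / d
    proxy = tuple(n + (x - n) * scale for n, x in zip(nearest, device_pos))
    return tuple(k * (p - x) for p, x in zip(proxy, device_pos))

# Device penetrated to 2 cm from a surface point: force pushes it back out.
cloud = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)]
print(haptic_force((0.02, 0.0, 0.0), cloud))  # ≈ (6.0, 0.0, 0.0) N
```

In a full system this force would be rendered on the haptic device at a high servo rate while the point cloud streams from the robot's RGB-D sensor.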