This study proposes a lightweight algorithm for real-time front-vehicle detection from low-resolution camera footage under various driving conditions. The proposed method first extracts the driving lanes using Canny edge detection and the Hough transform, and a forward region of interest (ROI) is then delineated from the extracted lane geometry. YOLOv11 is subsequently employed to detect vehicles in each frame, and only those located inside the defined ROI are classified as preceding vehicles. To evaluate the applicability of the proposed method in diverse environments, its performance was assessed across six driving scenarios: normal driving, traffic congestion, complex structural environments, nighttime, tunnel sections, and sharp curves. Experimental results show that the proposed approach maintains stable detection accuracy across conditions while offering low computational cost and high processing speed. Compared with segmentation-based deep-learning lane-detection models, the proposed method demonstrates superior real-time capability and operates using only a built-in monocular camera, without relying on expensive sensors such as LiDAR or radar or on artificial markers. This study serves as a foundation for vision-based advanced driver-assistance systems (ADAS), front-vehicle-following control, and road-hazard detection systems.
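The ROI-filtering step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the two lane boundaries recovered by the Hough transform are modeled as lines of the form x = m·y + b in image coordinates, and that each YOLO detection is a bounding box (x1, y1, x2, y2). The function names, lane parameters, and example boxes are all hypothetical.

```python
def lane_x(m, b, y):
    """x-coordinate of a lane boundary at image row y (line x = m*y + b)."""
    return m * y + b

def is_preceding_vehicle(box, left, right, horizon_y):
    """Classify a detection as a preceding vehicle if the bottom-center
    of its bounding box lies between the two lane boundaries and below
    the ROI's far edge (horizon_y).

    box:          (x1, y1, x2, y2) in image coordinates (y grows downward)
    left, right:  (m, b) parameters of the lane-boundary lines
    """
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, y2          # bottom-center: vehicle's road contact
    if cy < horizon_y:                   # above the ROI -> too far / not in ROI
        return False
    return lane_x(*left, cy) <= cx <= lane_x(*right, cy)

# Illustrative lane lines converging toward a vanishing point near y = 300
left_lane  = (-0.9, 800.0)   # x = -0.9*y + 800
right_lane = ( 0.9, 340.0)   # x =  0.9*y + 340

detections = [(560, 400, 720, 520),   # roughly centered ahead
              (60, 430, 200, 540)]    # far left, adjacent lane
flags = [is_preceding_vehicle(d, left_lane, right_lane, horizon_y=300)
         for d in detections]
print(flags)  # [True, False]
```

Using the box's bottom-center rather than its geometric center ties the test to where the vehicle meets the road, which is what the lane geometry actually constrains.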