Vision-Based Autonomous Vehicle Control using the Two-Point Visual Driver Control Model

09/29/2019
by Justin Zheng et al.

This work proposes a new self-driving framework that uses a human driver control model whose feature-input values are extracted from images by deep convolutional neural networks (CNNs). Advances in CNN-based image processing, together with accelerated computing hardware, have recently made real-time extraction of these feature-input values feasible. Using human driver models can lead to more "natural" driving behavior in self-driving vehicles. Specifically, we use the well-known two-point visual driver control model as the controller, and we use a top-down lane cost map CNN and the YOLOv2 CNN to extract the feature-input values. The framework relies exclusively on low-cost sensors: a monocular camera and wheel speed sensors. We experimentally validate the proposed framework on an outdoor track using a 1/5th-scale autonomous vehicle platform.
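The abstract does not reproduce the control law itself, but the two-point visual driver control model it refers to is commonly written (following Salvucci and Gray, 2004) as a steering-rate law driven by the visual angles to a near point and a far point on the road: delta_dot = k_f * theta_f_dot + k_n * theta_n_dot + k_I * theta_n. The sketch below is a minimal, hypothetical Python implementation of that law, not code from the paper; the class name, the gain values, and the assumption that the perception CNNs supply the two visual angles each frame are all illustrative.

    class TwoPointSteeringController:
        """Two-point visual driver control model (Salvucci & Gray, 2004).

        Steering-rate law:
            d(delta)/dt = k_far * d(theta_far)/dt
                        + k_near * d(theta_near)/dt
                        + k_int * theta_near

        theta_near / theta_far are the visual angles (rad) from the
        vehicle heading to the near and far points on the lane center.
        Gain values below are illustrative placeholders, not the
        paper's tuned parameters.
        """

        def __init__(self, k_far=15.0, k_near=5.0, k_int=1.0):
            self.k_far = k_far
            self.k_near = k_near
            self.k_int = k_int
            self.prev_theta_near = 0.0
            self.prev_theta_far = 0.0
            self.delta = 0.0  # current steering angle (rad)

        def update(self, theta_near, theta_far, dt):
            """Advance the controller one timestep; return steering angle (rad)."""
            # Finite-difference estimates of the visual-angle rates.
            dtheta_near = (theta_near - self.prev_theta_near) / dt
            dtheta_far = (theta_far - self.prev_theta_far) / dt

            # Two-point law: the rate terms stabilize heading, while the
            # integral-like term on the near-point angle nulls steady-state
            # lane offset.
            delta_dot = (self.k_far * dtheta_far
                         + self.k_near * dtheta_near
                         + self.k_int * theta_near)

            self.delta += delta_dot * dt
            self.prev_theta_near = theta_near
            self.prev_theta_far = theta_far
            return self.delta

In the framework described above, the perception side (the lane cost map CNN and YOLOv2) would produce the near- and far-point visual angles from each monocular camera frame, and update() would then convert them into a steering command at the camera frame rate.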
