Although these cameras collect abundant information, few existing practices automatically analyze and understand the content of the recordings. By extracting information from the video, a computer can better understand the driving conditions and surrounding environment. The most common application is in the field of self-driving, where the vehicle needs to be able to detect objects and lane markings, and to understand traffic lights and road signs, in order to make decisions about its direction and speed. For vehicles that are not self-driving but are equipped with front-view cameras, it is still valuable to post-process and analyze the recordings: by inferring the driver's decisions from detected objects and other information learned from the recording, the system could potentially provide appropriate advice on the driver's behavior.
This project focuses on three aspects of driving information:
Detecting road signs
Recognizing traffic lights (whether one is present, and its color)
Predicting driving speed and steering speed, and predicting the state of the vehicle (e.g., Forward, Still, Turn Left)
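To make the second task concrete, one simple approach is to classify the color of a candidate lamp region by its hue in HSV space. The sketch below is purely illustrative and is not necessarily the method used in this project: the function name, the hue thresholds, and the assumption that a single dominant RGB color has already been extracted from the frame are all choices made for this example.

```python
import colorsys

def classify_light_color(r, g, b):
    """Classify a dominant lamp color (0-255 RGB) as a traffic light state.

    Illustrative sketch: a real pipeline would first locate the lamp
    region in the frame and aggregate many pixels before classifying.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # A lit lamp should be both saturated and bright; otherwise report "none".
    if s < 0.3 or v < 0.3:
        return "none"
    deg = h * 360.0  # hue in degrees
    if deg < 20 or deg > 340:
        return "red"
    if 40 <= deg <= 70:
        return "yellow"
    if 90 <= deg <= 160:
        return "green"
    return "none"
```

Hue ranges like these are sensitive to lighting and camera exposure, which is one reason learned detectors tend to outperform fixed thresholds in practice.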
This project uses image processing techniques to develop a basic “video interpreter” for a vehicle’s front-view camera, which allows the computer to understand some important information about the drive by “watching” what the driver can see. In practice, there are far more advanced techniques that can perform the same tasks with much better accuracy. Nevertheless, the road sign detection in this project achieves moderate accuracy, the traffic light detection is comparatively more robust, and the velocity predictions achieve reasonable accuracy.