Advanced Lane Finding

The fourth project in the self-driving car course revisited and improved on the lane-finding project we did earlier.

The goal was to learn the very basics of computer vision. This includes learning about

  • camera calibration using a set of chessboard images, like this:

calibration image

  • distortion correction using the calculated distortion of the camera,
  • color thresholding, using various color spaces, to obtain a "binary" image that shows only the pixels likely to belong to a lane line, so that from the image below,

road image

we get an output like this one:

binary thresholded image

  • perspective transformation for getting a birds-eye view of the road (so that lane lines are parallel in the transformed image),

birds-eye view image

  • detecting lane-line pixels on the transformed image using a pixel-density histogram and the sliding-window method (finding the densest columns in the lower half of the image, then sliding a search window upwards, step by step, to follow where the lines go),
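A minimal numpy sketch of that histogram and sliding-window search; the window count, margin, and pixel threshold are illustrative defaults:

```python
import numpy as np

def find_lane_bases(binary):
    """Column histogram of the lower half; peaks left/right of center are lane bases."""
    histogram = np.sum(binary[binary.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    left_base = np.argmax(histogram[:midpoint])
    right_base = np.argmax(histogram[midpoint:]) + midpoint
    return left_base, right_base

def sliding_window(binary, base, n_windows=9, margin=50, minpix=10):
    """Collect lane-pixel coordinates by stepping a window up from the base column."""
    h = binary.shape[0]
    win_h = h // n_windows
    ys, xs = binary.nonzero()
    x_current = base
    lane_ys, lane_xs = [], []
    for w in range(n_windows):
        y_low, y_high = h - (w + 1) * win_h, h - w * win_h
        inside = ((ys >= y_low) & (ys < y_high) &
                  (xs >= x_current - margin) & (xs < x_current + margin))
        lane_ys.append(ys[inside])
        lane_xs.append(xs[inside])
        if inside.sum() > minpix:          # recenter the next window on the mean x
            x_current = int(xs[inside].mean())
    return np.concatenate(lane_ys), np.concatenate(lane_xs)
```

The collected pixel coordinates are then fitted with a second-order polynomial per line.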
  • determining the lane curvature and the car’s position relative to the center of the lane,
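The curvature comes from fitting x = A·y² + B·y + C to the detected pixels (in meters) and evaluating the standard radius-of-curvature formula; the meters-per-pixel scale factors below are assumed values, not necessarily the project's:

```python
import numpy as np

# assumed scale factors for the warped image (common Udacity defaults)
YM_PER_PIX = 30 / 720    # meters per pixel vertically
XM_PER_PIX = 3.7 / 700   # meters per pixel horizontally

def curvature_radius(ys, xs, y_eval):
    """Radius of curvature in meters of a 2nd-order fit x = A*y^2 + B*y + C."""
    A, B, _ = np.polyfit(ys * YM_PER_PIX, xs * XM_PER_PIX, 2)
    y = y_eval * YM_PER_PIX
    return (1 + (2 * A * y + B) ** 2) ** 1.5 / abs(2 * A)

def center_offset(left_x, right_x, image_width):
    """Signed distance in meters of the camera from the lane center."""
    lane_center = (left_x + right_x) / 2
    return (image_width / 2 - lane_center) * XM_PER_PIX
```

The offset calculation assumes the camera is mounted at the lateral center of the car, so the image midpoint corresponds to the car's position.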
  • outputting a visual estimation of the lane by warping the detected lane area back onto the original image. The final output looks something like this:

final image

Written on January 7, 2018

If you notice anything wrong with this post (factual error, rude tone, bad grammar, typo, etc.), and you feel like giving feedback, please do so by contacting me at samubalogh@gmail.com. Thank you!