Description
Learn how to detect obstacles in lidar point clouds by clustering and segmenting them, how to apply thresholds and filters to radar data to track objects accurately, and how to augment your perception by projecting camera images into three dimensions and fusing these projections with other sensor data. Combining sensor data with Kalman filters allows a vehicle to perceive its surroundings and track objects over time.
Syllabus:
Course 1: Lidar
Introduction to Lidar & Point Clouds
- Lidar data representation
- Working with a simulator to create PCD
- Visualizing Lidar data
Point Cloud Segmentation
- Using PCL to segment point clouds
- The RANSAC algorithm for planar model fitting
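The RANSAC idea for planar model fitting can be sketched from scratch (in practice PCL's SACSegmentation does this for you): repeatedly fit a plane to three random points and keep the model with the most inliers. The point struct, fixed seed, and tolerance below are illustrative assumptions.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Point { double x, y, z; };

// Plane in the form ax + by + cz + d = 0, with (a, b, c) normalized.
struct Plane { double a, b, c, d; };

// Fit a plane through three points via the cross product of two edge vectors.
Plane planeFrom3(const Point& p1, const Point& p2, const Point& p3) {
    double ax = p2.x - p1.x, ay = p2.y - p1.y, az = p2.z - p1.z;
    double bx = p3.x - p1.x, by = p3.y - p1.y, bz = p3.z - p1.z;
    double nx = ay * bz - az * by;
    double ny = az * bx - ax * bz;
    double nz = ax * by - ay * bx;
    double len = std::sqrt(nx * nx + ny * ny + nz * nz);
    nx /= len; ny /= len; nz /= len;
    return {nx, ny, nz, -(nx * p1.x + ny * p1.y + nz * p1.z)};
}

double pointPlaneDist(const Plane& pl, const Point& p) {
    return std::fabs(pl.a * p.x + pl.b * p.y + pl.c * p.z + pl.d);
}

// RANSAC: the plane supported by the most points within distanceTol wins.
// The winning inliers are the "ground"; everything else is potential obstacle.
std::vector<int> ransacPlane(const std::vector<Point>& cloud,
                             int maxIterations, double distanceTol) {
    std::mt19937 gen(42);  // fixed seed so the sketch is reproducible
    std::uniform_int_distribution<int> pick(0, (int)cloud.size() - 1);
    std::vector<int> bestInliers;
    for (int it = 0; it < maxIterations; ++it) {
        int i = pick(gen), j = pick(gen), k = pick(gen);
        if (i == j || j == k || i == k) continue;  // need 3 distinct samples
        Plane pl = planeFrom3(cloud[i], cloud[j], cloud[k]);
        std::vector<int> inliers;
        for (int n = 0; n < (int)cloud.size(); ++n)
            if (pointPlaneDist(pl, cloud[n]) < distanceTol)
                inliers.push_back(n);
        if (inliers.size() > bestInliers.size()) bestInliers = inliers;
    }
    return bestInliers;
}
```

Separating the cloud into the returned inlier set (road plane) and its complement (obstacles) is exactly the segmentation step the module builds toward.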
Clustering Obstacles
- Using PCL to cluster obstacles
- Using a KD-Tree to store point cloud data
- Implementing Euclidean Clustering to find clusters
- Applying bounding boxes around clusters
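Euclidean clustering itself is simple: points closer than a tolerance belong to the same cluster, grown by repeated neighbor searches. The course accelerates the neighbor search with a KD-Tree; this sketch uses a brute-force O(n²) search to keep the clustering logic visible. The point type and tolerance are illustrative assumptions.

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y, z; };

double dist(const Pt& a, const Pt& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Grow clusters by flood fill: start from an unprocessed point, repeatedly
// pull in every neighbor within `tol`, and emit the group as one cluster.
// A KD-Tree would replace the inner brute-force loop with a radius search.
std::vector<std::vector<int>> euclideanCluster(const std::vector<Pt>& pts,
                                               double tol) {
    std::vector<bool> processed(pts.size(), false);
    std::vector<std::vector<int>> clusters;
    for (int i = 0; i < (int)pts.size(); ++i) {
        if (processed[i]) continue;
        std::vector<int> cluster, stack{i};
        processed[i] = true;
        while (!stack.empty()) {
            int p = stack.back(); stack.pop_back();
            cluster.push_back(p);
            for (int q = 0; q < (int)pts.size(); ++q)
                if (!processed[q] && dist(pts[p], pts[q]) < tol) {
                    processed[q] = true;
                    stack.push_back(q);
                }
        }
        clusters.push_back(cluster);
    }
    return clusters;
}
```

Each returned index cluster corresponds to one obstacle; taking the min/max coordinates over a cluster gives the axis-aligned bounding box from the last bullet.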
Working with Real Point Cloud Data (PCD)
- Working with real self-driving car PCD data
- Filtering PCD data
- Playing back multiple PCD files
- Applying point cloud processing to detect obstacles
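A typical first filtering step on real PCD is voxel-grid downsampling: keep at most one point per cubic cell so later processing stays fast. PCL's VoxelGrid keeps the centroid of each cell; this sketch keeps the first point seen per cell, which is enough to show the idea. The key-packing scheme is an illustrative assumption.

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_set>
#include <vector>

struct P3 { double x, y, z; };

// Voxel-grid downsampling: map each point to the integer index of the cubic
// cell (side length `leaf`) that contains it, and keep one point per cell.
std::vector<P3> voxelFilter(const std::vector<P3>& cloud, double leaf) {
    std::unordered_set<std::int64_t> seen;
    std::vector<P3> out;
    for (const P3& p : cloud) {
        std::int64_t ix = (std::int64_t)std::floor(p.x / leaf);
        std::int64_t iy = (std::int64_t)std::floor(p.y / leaf);
        std::int64_t iz = (std::int64_t)std::floor(p.z / leaf);
        // Pack the three indices into one hash key
        // (assumes each index fits in 21 bits).
        std::int64_t key = (ix & 0x1FFFFF)
                         | ((iy & 0x1FFFFF) << 21)
                         | ((iz & 0x1FFFFF) << 42);
        if (seen.insert(key).second) out.push_back(p);
    }
    return out;
}
```

Real pipelines usually combine this with a region-of-interest crop (drop points far from the ego vehicle and points on its own roof) before segmentation.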
Project: Lidar Obstacle Detection
To detect obstacles in a driving environment, filter, segment, and cluster real point cloud data.
Course 2: Radar
Introduction to Radar
- Handling real radar data
- Calculating object headings and velocities
- Determining the appropriate sensor specifications for a task
Radar Calibration
- Correcting radar data to account for radial velocity
- Filtering noise from real radar sensors
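The radial-velocity correction above exists because a radar's Doppler measurement gives only the component of a target's velocity along the line of sight, not its full velocity vector. A minimal sketch of that projection (2D, sensor at the origin — both simplifying assumptions):

```cpp
#include <cmath>

// A radar measures only the radial (line-of-sight) velocity via the Doppler
// shift. Given the target position relative to the sensor and its true
// velocity vector, the measured value is the projection of the velocity onto
// the unit line-of-sight vector: v_r = (p . v) / |p|.
double radialVelocity(double px, double py, double vx, double vy) {
    double range = std::sqrt(px * px + py * py);
    return (px * vx + py * vy) / range;
}
```

A target at (3, 4) moving at (1, 0) m/s thus shows up as only 0.6 m/s of radial velocity, which is why raw radar speeds must be corrected before fusing them with other sensors.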
Radar Detection
- Thresholding radar signatures to eliminate false positives
- Predicting the location of occluded objects
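One standard way to threshold radar signatures adaptively (rather than with a single fixed cutoff) is cell-averaging CFAR, sketched below on a 1D range profile. The syllabus does not name the exact method, so treat this as an assumed example; the training/guard sizes and scale factor are illustrative.

```cpp
#include <cstdlib>
#include <vector>

// Cell-averaging CFAR (constant false alarm rate): for each cell under test,
// estimate the local noise floor from surrounding "training" cells, skipping
// a few "guard" cells next to the target so the target's own energy does not
// inflate the estimate. Declare a detection only when the cell exceeds the
// local estimate by a scale factor - this adapts the threshold to local noise
// and suppresses false positives that a global threshold would let through.
std::vector<int> caCfar(const std::vector<double>& signal,
                        int train, int guard, double scale) {
    std::vector<int> detections;
    int n = (int)signal.size();
    for (int i = train + guard; i < n - train - guard; ++i) {
        double noise = 0.0;
        int count = 0;
        for (int j = i - train - guard; j <= i + train + guard; ++j) {
            if (std::abs(j - i) <= guard) continue;  // skip guard cells + CUT
            noise += signal[j];
            ++count;
        }
        double threshold = (noise / count) * scale;
        if (signal[i] > threshold) detections.push_back(i);
    }
    return detections;
}
```

On a flat noise floor with one strong return, only the cell holding the return crosses its local threshold; neighboring cells see the peak pushed into their training window, which raises their thresholds and keeps them quiet.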
Project: Radar Obstacle Detection
To detect obstacles, calibrate, threshold, and filter real radar data.
Course 3: Camera
Sensor Fusion & Autonomous Driving
- Understanding the SAE levels of autonomy
- Comparing typical autonomous vehicle sensor sets from companies including Tesla, Uber, and Mercedes
- Comparing camera, lidar, and radar using a set of industry-grade performance criteria
Camera Technology and Collision Detection
- Understanding how light forms digital images and which properties of the camera (e.g. aperture, focal length) affect this formation
- Manipulating images using the OpenCV computer vision library
- Designing a collision detection system based on motion models, lidar and camera measurements
Feature Tracking
- Detecting features from objects in a camera image using state-of-the-art detectors and standard methods
- Matching features between images to track objects over time using state-of-the-art binary descriptors
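Matching binary descriptors boils down to nearest-neighbor search under the Hamming distance (number of differing bits) — OpenCV's BFMatcher with NORM_HAMMING does exactly this. A from-scratch sketch, assuming 256-bit BRIEF/ORB-style descriptors:

```cpp
#include <bitset>
#include <vector>

using Descriptor = std::bitset<256>;  // e.g. an ORB-style 256-bit descriptor

// Hamming distance between two binary descriptors: XOR, then count set bits.
int hamming(const Descriptor& a, const Descriptor& b) {
    return (int)(a ^ b).count();
}

// Brute-force matching: each descriptor from the first image is paired with
// its nearest neighbor (smallest Hamming distance) in the second image.
std::vector<int> matchNearest(const std::vector<Descriptor>& img1,
                              const std::vector<Descriptor>& img2) {
    std::vector<int> matches;
    for (const Descriptor& d : img1) {
        int best = 0;
        for (int j = 1; j < (int)img2.size(); ++j)
            if (hamming(d, img2[j]) < hamming(d, img2[best])) best = j;
        matches.push_back(best);
    }
    return matches;
}
```

Production matchers add a ratio or cross-check test to discard ambiguous matches; tracking an object over time then reduces to chaining these matches frame to frame.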
Camera and Lidar Fusion
- Projecting 3D lidar points into a camera sensor
- Using deep learning to detect vehicles (and other objects) in camera images
- Creating a three-dimensional object from lidar and camera data
Project: Camera and Lidar Fusion
Using camera and lidar measurements, detect and track objects in 3D space from the benchmark KITTI dataset. Calculate and compare the time-to-collision based on both sensors.
Determine the best keypoint detector and descriptor combination for object tracking.
Course 4: Kalman Filters
Kalman Filters
- Constructing Kalman filters
- Merging data from multiple sources
- Improving tracking accuracy
- Reducing sensor noise
Lidar and Radar Fusion with Kalman Filters
- Building a Kalman Filter in C++
- Handling both radar and lidar data
Extended Kalman Filters
- Predicting when non-linear motion will cause errors in a Kalman filter
- Programming an extended Kalman filter to cope with non-linear motion
- Constructing Jacobian matrices to support EKFs
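For radar tracking with a (px, py, vx, vy) state, the Jacobian the EKF needs is that of the measurement function h(x) = (range, bearing, range rate). A sketch using a raw 3x4 array (a real implementation would use Eigen and guard against division by zero near the origin):

```cpp
#include <cmath>

// Jacobian of the radar measurement h(x) = (rho, phi, rho_dot) with respect
// to the state (px, py, vx, vy). The EKF linearizes the non-linear h around
// the current estimate by using this matrix in place of the linear H.
// Note: real code must check that px*px + py*py is not (near) zero.
void radarJacobian(double px, double py, double vx, double vy,
                   double H[3][4]) {
    double c1 = px * px + py * py;  // rho^2
    double c2 = std::sqrt(c1);      // rho
    double c3 = c1 * c2;            // rho^3
    H[0][0] = px / c2;   H[0][1] = py / c2;   H[0][2] = 0.0;      H[0][3] = 0.0;
    H[1][0] = -py / c1;  H[1][1] = px / c1;   H[1][2] = 0.0;      H[1][3] = 0.0;
    H[2][0] = py * (vx * py - vy * px) / c3;
    H[2][1] = px * (vy * px - vx * py) / c3;
    H[2][2] = px / c2;   H[2][3] = py / c2;
}
```

Each row is the gradient of one measurement component; the first row, for instance, says range changes with position along the unit line-of-sight vector and not with velocity.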
Unscented Kalman Filters
- Estimating when highly non-linear motion might break even an extended Kalman filter
- Creating an unscented Kalman filter to accurately track non-linear motion
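Instead of linearizing with Jacobians, the UKF propagates a small set of "sigma points" through the non-linear model. The sketch below generates the 2n + 1 sigma points for an n = 2 state with a hand-rolled 2x2 Cholesky factor; a real UKF (as in the project) uses a larger augmented state and a linear algebra library such as Eigen.

```cpp
#include <cmath>

// Generate the 2n + 1 sigma points for a 2D state with mean x and covariance
// P: the mean itself, plus points pushed out along each column of the matrix
// square root of P, scaled by sqrt(lambda + n). Propagating these through the
// process model and re-averaging recovers the predicted mean and covariance.
void sigmaPoints(const double x[2], const double P[2][2], double lambda,
                 double chi[5][2]) {
    // Cholesky factor L of P (lower triangular, P = L * L^T).
    double l00 = std::sqrt(P[0][0]);
    double l10 = P[1][0] / l00;
    double l11 = std::sqrt(P[1][1] - l10 * l10);
    double s = std::sqrt(lambda + 2.0);  // sqrt(lambda + n) with n = 2
    chi[0][0] = x[0];            chi[0][1] = x[1];
    chi[1][0] = x[0] + s * l00;  chi[1][1] = x[1] + s * l10;  // +col 1 of L
    chi[2][0] = x[0];            chi[2][1] = x[1] + s * l11;  // +col 2 of L
    chi[3][0] = x[0] - s * l00;  chi[3][1] = x[1] - s * l10;  // -col 1 of L
    chi[4][0] = x[0];            chi[4][1] = x[1] - s * l11;  // -col 2 of L
}
```

With weights w0 = lambda / (lambda + n) for the mean point and 1 / (2(lambda + n)) for the rest, the weighted average of the sigma points reproduces x exactly, which is a handy sanity check on any implementation.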
Project: Unscented Kalman Filter
Put your skills to the test! Code an unscented Kalman filter in C++ to track highly non-linear pedestrian and bicycle motion.