- Bee O'Leary
Giving Your Object Detection an ‘Edge’
Today, everything is captured on video, whether it's CCTV footage at a convenience store, videos posted daily to social media sites, or traffic footage on the news. 720,000 hours of video are uploaded to YouTube alone every day. The value of this enormous dataset is still being discovered, especially when combined with the ability to detect objects within an image.
Accuracy
One issue object detection continues to face is accuracy, and it gets harder when the object in question can also show up in areas of the image that are not of interest. As an example, a data scientist wants to look at the correlation between pedestrians walking in the street and pedestrian-related accidents across multiple cities. With CCTV traffic cameras and an object detector that finds people, this can be done easily. However, people would also be detected on the sidewalk, in shops, and through glass windows, as in the example picture below.

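To make the problem concrete, here is a minimal sketch of running an off-the-shelf person detector on a single traffic-camera frame. It uses OpenCV's stock HOG pedestrian detector purely as a stand-in (the post does not name a specific model), and the file names are placeholders:

```python
import cv2

# Minimal sketch: detect people anywhere in a traffic-camera frame.
# The HOG + SVM pedestrian detector ships with OpenCV; any modern
# detector could be substituted. "intersection.jpg" is a placeholder.
frame = cv2.imread("intersection.jpg")

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# One bounding box (x, y, w, h) per detected person, regardless of
# whether they are in the road, on the sidewalk, or behind a window.
boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))

for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```

Every box is treated the same here, which is exactly the accuracy problem: a detection on the sidewalk counts just as much as a detection in the street.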
Edge Detection
A simple solution would be to have the data scientist go through and manually mark the boundaries for object detection in each camera's view. This task becomes tedious and potentially problematic as more and more cameras are added, with the average American city having 5,830 intersections [https://arxiv.org/pdf/1705.02198.pdf]. Thousands of cameras are too many to define manually, especially if any of them get moved slightly, replaced, or broken.
In cases like this, when so many video feeds are involved, a preprocessing step using edge detection can identify the applicable area automatically rather than requiring it to be set by hand. By applying edge detection, we get an outline of all the background buildings and the edge of the road, as seen below.

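A minimal sketch of that preprocessing step, assuming OpenCV and a Canny edge detector (the specific detector and thresholds are assumptions, not taken from the original pipeline):

```python
import cv2

# Minimal sketch of the edge-detection preprocessing step.
frame = cv2.imread("intersection.jpg")  # placeholder frame path
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Blur slightly so fine texture (leaves, brick, signage) does not
# produce spurious edges, then run Canny edge detection.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)  # thresholds are tuning assumptions

cv2.imwrite("edges.jpg", edges)
```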
Taking the largest contours and eliminating any that sit too close to the top of the field of view leaves a clean mask of the road. In this field of view there is a break in the mask where the bicyclist is, but the break can be rectified by connecting the two contours.

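One way to sketch that contour filtering and gap closing, again assuming OpenCV; the length threshold, the "too close to the top" cutoff, and the kernel size used to connect the broken contours are all illustrative values rather than the ones used in practice:

```python
import cv2
import numpy as np

# Minimal sketch: turn the edge image into a road mask.
edges = cv2.imread("edges.jpg", cv2.IMREAD_GRAYSCALE)
h, w = edges.shape

# Morphological closing connects nearby contours, which repairs the
# break in the road edge left by the bicyclist.
kernel = np.ones((15, 15), np.uint8)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

mask = np.zeros((h, w), np.uint8)
for c in contours:
    x, y, cw, ch = cv2.boundingRect(c)
    # Keep only long contours whose topmost point sits well below the
    # top of the field of view; short or high contours belong to the
    # background buildings and sky.
    if cv2.arcLength(c, False) > w and y > 0.3 * h:
        cv2.drawContours(mask, [c], -1, 255, thickness=cv2.FILLED)

cv2.imwrite("road_mask.jpg", mask)
```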
When the algorithm detects people, a condition can be added requiring a detected person to extend down into this mask to be considered on the road. With this condition, the resulting detection would be:

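A minimal sketch of that condition, reusing the road mask and the detection boxes from the earlier sketches; the helper name is_on_road is hypothetical:

```python
import cv2
import numpy as np

# Minimal sketch: keep only detections whose bottom edge lands on the
# road mask built above.
road_mask = cv2.imread("road_mask.jpg", cv2.IMREAD_GRAYSCALE)

def is_on_road(box, mask):
    """Return True if the bottom of the bounding box falls on the road."""
    x, y, w, h = box
    bottom = min(y + h, mask.shape[0] - 1)
    # Sample the mask along the bottom edge of the box; the detection
    # counts only if the person extends down into the road mask.
    strip = mask[bottom, x:x + w]
    return strip.size > 0 and bool(np.any(strip > 0))

# `boxes` comes from the person detector in the first sketch.
road_boxes = [b for b in boxes if is_on_road(b, road_mask)]
```

Detections on the sidewalk or behind shop windows fail this check, so only people who extend down onto the road remain.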
Conclusion
By running edge detection once a day on these cameras, the area of interest in each video feed can be defined automatically. Instead of spending processing power on the entire frame, the object detector can be limited to just the roads. This improves speed and reduces processing time, giving the object detection algorithm an 'edge.'
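As a final sketch of how the cached mask could be used (an assumption about the pipeline, not something specified here), the day's road mask can be applied to each incoming frame so the detector only ever sees the road:

```python
import cv2

# Minimal sketch: reuse the cached road mask, recomputed perhaps once
# a day, to restrict each incoming frame to the road before detection.
road_mask = cv2.imread("road_mask.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("intersection.jpg")  # placeholder for a live frame

# Zero out everything outside the road, then run the detector on `roi`.
roi = cv2.bitwise_and(frame, frame, mask=road_mask)
```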
For more information, contact Presage Technologies. We would love to hear about your intended use case.