Spatio-temporal convolutional neural networks for video object detection

Author:
  1. Cores Costa, Daniel
Supervised by:
  1. Victor Manuel Brea Sánchez (Supervisor)
  2. Manuel Mucientes Molina (Supervisor)

Defending university: Universidade de Santiago de Compostela

Date of defense: 21 November 2022

Examination committee:
  1. Petia Radeva Ivanova (Chair)
  2. M. J. Carreira Nouche (Secretary)
  3. Krzysztof Slot (Member)

Type: Doctoral thesis

Teseo: 764035 (Dialnet)

Abstract

The object detection problem comprises two main tasks: object localization and object classification. Detection precision in still images has greatly improved with the use of Deep Learning techniques, especially with the adoption of Convolutional Neural Networks. However, object detection in videos presents new challenges, such as motion blur, out-of-focus frames, or object occlusions, that deteriorate object features in specific frames. Moreover, traditional object detectors do not exploit spatio-temporal information, which can be crucial to address these challenges and boost detection precision. Hence, new object detection frameworks specifically designed for videos are needed to replicate the success achieved in the single-image domain. The availability of spatio-temporal information unlocks the possibility of analyzing long- and short-term relations among detections at different time steps. This greatly improves object classification precision in deteriorated frames for which a single-image object detector would not be able to provide the correct object category. We propose new methods to establish these relations and aggregate information from different frames, showing experimentally that they improve on single-image baselines and previous video object detectors. In addition, we explore the utility of spatio-temporal information to reduce the number of training examples while keeping competitive detection precision. This makes it possible to apply our proposal in domains where training data is scarce and, in general, reduces annotation costs.
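To make the idea of relating detections across time steps concrete, below is a minimal sketch of attention-based aggregation of per-frame detection features. It is not the thesis' actual architecture: the class name, feature shapes, and hyperparameters are illustrative assumptions, and the mechanism shown is generic scaled dot-product attention in the spirit of relation/aggregation modules used in video object detection.

```python
# Hypothetical sketch, not the author's method: a target frame's detection
# features borrow evidence from detections in neighboring (support) frames
# via scaled dot-product attention, so deteriorated frames can be classified
# using clearer views of the same objects.
import torch
import torch.nn as nn


class TemporalFeatureAggregator(nn.Module):
    """Re-weights current-frame detection features with support-frame features."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.query = nn.Linear(feat_dim, feat_dim)
        self.key = nn.Linear(feat_dim, feat_dim)
        self.value = nn.Linear(feat_dim, feat_dim)
        self.scale = feat_dim ** -0.5

    def forward(self, target_feats: torch.Tensor,
                support_feats: torch.Tensor) -> torch.Tensor:
        # target_feats:  (N_t, D) detection features in the current frame
        # support_feats: (N_s, D) detection features pooled from other frames
        q = self.query(target_feats)                          # (N_t, D)
        k = self.key(support_feats)                           # (N_s, D)
        v = self.value(support_feats)                         # (N_s, D)
        attn = torch.softmax(q @ k.t() * self.scale, dim=-1)  # (N_t, N_s)
        # Residual connection: the original per-frame signal is preserved
        # and enriched with temporally aggregated evidence.
        return target_feats + attn @ v


# Usage: 10 detections in the current frame, 50 candidates from nearby frames.
agg = TemporalFeatureAggregator(feat_dim=256)
enhanced = agg(torch.randn(10, 256), torch.randn(50, 256))
print(enhanced.shape)  # torch.Size([10, 256])
```

Long- and short-term relations would differ mainly in how the support set is built (e.g., features sampled from distant frames across the whole video versus a small window of adjacent frames); the aggregation step itself can take the same form as above.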