Candidate:
Filipa Marília Monteiro Ramos Ferreira
Date, time and location:
23 July 2025, 14:30, Sala de Atos, Faculty of Engineering, University of Porto
President of the Jury:
Carlos Miguel Ferraz Baquero-Moreno (PhD), Full Professor, Department of Informatics Engineering, Faculty of Engineering, University of Porto
Members:
Tiago Manuel Lourenço Azevedo (PhD), Associate Researcher, Department of Computer Science and Technology, University of Cambridge, United Kingdom;
Marco António Morais Veloso (PhD), Coordinating Professor, Department of Science and Technology, Oliveira do Hospital School of Technology and Management, Polytechnic Institute of Coimbra;
Luís Filipe Pinto de Almeida Teixeira (PhD), Associate Professor, Department of Informatics Engineering, Faculty of Engineering, University of Porto;
Rosaldo José Fernandes Rossetti (PhD), Full Professor, Department of Informatics Engineering, Faculty of Engineering, University of Porto (Supervisor).
Abstract:
Ensuring the reliability and robustness of deep learning remains a pressing challenge, particularly as neural networks gain traction in safety-critical applications. While extensive research has focused on improving accuracy across datasets, generalisation, interpretability and robustness in the deployment domain remain poorly understood; in real-world scenarios, models often underperform without clear explanation. To address these concerns, uncertainty quantification has emerged as a key research direction, offering deeper insight into neural networks and enhancing confidence, interpretability and robustness. Among safety-critical applications, self-driving vehicles stand out, as uncertainty-aware object detection can significantly improve perception and decision-making. This thesis explores interpretations of uncertainty tailored to object detection in the context of self-driving vehicles. To this end, two novel methods for estimating the aleatoric component and one approach to modelling the epistemic uncertainty are proposed. By exploiting the anchor distributions readily available in any anchor-based object detector, uncertainty is estimated holistically while avoiding costly sampling procedures. Furthermore, the concept of existence is introduced: a probability measure indicating whether an object truly exists in the real world, regardless of its classification. Building upon these ideas, three applications of uncertainty and existence are explored, namely the Existence Map, the Uncertainty Map and the Existence Probability. While the two maps encode the existence measure and the aleatoric uncertainty over the space of input samples, the Existence Probability merges the information provided by the Existence Map with the standard detections, supplementing the model outputs.
The evaluation showcases the coherence of the uncertainty estimates and demonstrates the usefulness of the Existence and Uncertainty Maps in supporting the standard model, providing open-set capabilities and assigning a degree of confidence to true positives, false positives and false negatives. The merging strategy behind the Existence Probability yields a considerable improvement in the performance of the object detector both in validation and under perturbation, detecting all classes of the dataset despite being trained only on cars, pedestrians and cyclists. The second part of this thesis presents a study of the underspecification distribution and its connection with epistemic uncertainty. Underspecification, a recently coined concept, greatly endangers the deployment of deep learning in safety-critical systems, as it describes the variability of predictors generated by a single architecture whose performance increasingly diverges in the application domain. The analysis shows that, if the uncertainty estimates are correctly calibrated, a single predictor suffices to predict the spread of the underspecification distribution, avoiding repeated, costly training sessions. All proposed methods are designed to be model-agnostic, real-time compatible and seamlessly applicable to deployed models without retraining, underscoring their significance for robust and interpretable object detection in autonomous driving.