PhD Defense in Informatics Engineering: “Onboard detection and guidance based on side scan sonar images for autonomous underwater vehicles”

Candidate: Martin Joseph Aubard

Date, time and location:
25 July 2025, 14:00, Sala de Atos DEEC – I-105, Faculty of Engineering, University of Porto

President of the Jury:
Pedro Nuno Ferreira da Rosa da Cruz Diniz (PhD), Full Professor, Department of Informatics Engineering, Faculty of Engineering, University of Porto

Members:
Bilal Wehbe, Senior Researcher at the German Research Center for Artificial Intelligence, Germany;
Catarina Helena Branco Simões da Silva, Associate Professor, Department of Computer Engineering, Faculty of Science and Technology, University of Coimbra;
Andry Maykol Gomes Pinto, Associate Professor, Department of Electrical and Computer Engineering, Faculty of Engineering, University of Porto;
Ana Maria Dias Madureira Pereira, Coordinating Professor with Habilitation, Department of Computer Engineering, Instituto Superior de Engenharia do Porto, Polytechnic of Porto (Supervisor).

The thesis was co-supervised by Luís Filipe Pinto de Almeida Teixeira (PhD), Associate Professor in the Department of Informatics Engineering at the Faculty of Engineering of the University of Porto.

Abstract:

This thesis addresses the challenge of improving the onboard detection and interaction capabilities of Autonomous Underwater Vehicles (AUVs) using Side-Scan Sonar (SSS) data. Traditionally, underwater missions have relied on pre-defined plans in which data are analyzed post-mission by operators or experts. This workflow is time-consuming, often requiring multiple missions to identify and localize underwater targets. The need for repeated missions increases operational costs and complexity, highlighting the inefficiency of current methodologies. Moreover, such approaches do not allow the AUV to interact with detected targets in real time, limiting the scope of mission adaptation and real-time decision-making. To overcome these limitations, this thesis presents a novel framework integrating deep learning models for object detection directly onboard AUVs. This integration enables the vehicle to detect, localize, and interact with underwater targets in real time, offering significant improvements over traditional post-mission analysis. The framework builds upon the LSTS toolchain, which is responsible for AUV motion control and communication, and introduces enhanced real-time data processing capabilities. However, deploying such models on an embedded system is constrained by limited computational resources, which affects model performance. Knowledge distillation methods were therefore applied to obtain smaller, more efficient models that perform onboard detection without sacrificing accuracy. Additionally, to improve the model's robustness against underwater noise, a novel adversarial retraining framework, ROSAR, is introduced, ensuring reliable operation even in noisy sonar environments. Building on the enhanced onboard detection and localization, we then focused on onboard interaction with the detected objects. This is realized by extending the onboard framework and validating it through a customized simulator and a pipeline inspection use case, which reduces mission time by combining sonar detection and camera data collection in a single mission, using behavior trees and safety-assessed models. Given the lack of open-source sonar datasets in the field, this thesis contributes two novel publicly available side-scan sonar datasets, SWDD and Subpipe, which include field-collected data on walls and pipelines and are manually annotated for object detection. By shifting from post-mission analysis to real-time detection and interaction, this thesis significantly improves the operational efficiency of AUV missions. The proposed framework streamlines underwater operations and enhances AUVs' autonomous behavior, relying on efficient, accurate, and robust object detection models for underwater exploration and monitoring applications.
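
As a loose illustration of the knowledge distillation step described above, the sketch below shows a standard response-based distillation loss in PyTorch. It is a classification-style stand-in rather than the detection-specific formulation used in the thesis, and the temperature and weighting values are arbitrary assumptions.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
        # Soft targets: the student mimics the teacher's softened output distribution.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: the usual supervised loss on ground-truth labels.
        hard = F.cross_entropy(student_logits, targets)
        return alpha * soft + (1 - alpha) * hard

In a detection setting the same idea is applied to the detector's prediction heads, with the large model acting as teacher and the compact onboard model as student.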

PhD Defense in Informatics Engineering: “Uncertainty interpretations for the robustness of object detection in self-driving vehicles”

Candidate:
Filipa Marília Monteiro Ramos Ferreira

Date, time and location:
23 July 2025, 14:30, Sala de Atos, Faculty of Engineering, University of Porto

President of the Jury:
Carlos Miguel Ferraz Baquero-Moreno (PhD), Full Professor, Department of Informatics Engineering, Faculty of Engineering, University of Porto

Members:
Tiago Manuel Lourenço Azevedo (PhD), Associate Researcher, Department of Computer Science and Technology, University of Cambridge, United Kingdom;
Marco António Morais Veloso (PhD), Coordinating Professor, Department of Science and Technology, Oliveira do Hospital School of Technology and Management, Polytechnic Institute of Coimbra;
Luís Filipe Pinto de Almeida Teixeira (PhD), Associate Professor, Department of Informatics Engineering, Faculty of Engineering, University of Porto;
Rosaldo José Fernandes Rossetti (PhD), Full Professor, Department of Informatics Engineering, Faculty of Engineering, University of Porto (Supervisor).

Abstract:

Ensuring the reliability and robustness of deep learning remains a pressing challenge, particularly as neural networks gain traction in safety-critical applications. While extensive research has focused on improving accuracy across datasets, generalisation, interpretability and robustness in the deployment domain remain poorly understood. In fact, in real-world scenarios, models often underperform without clear explanations. Addressing these concerns, uncertainty quantification has emerged as a key research direction, offering deeper insight into neural networks and enhancing confidence, interpretability, and robustness. Among critical applications, self-driving vehicles stand out, where uncertainty-aware object detection can significantly improve perception and decision-making. This thesis explores interpretations of uncertainty tailored to object detection in the context of self-driving vehicles. In this sense, two novel methods to estimate the aleatoric component and one approach to modelling the epistemic uncertainty are proposed. Through the utilisation of anchor distributions readily available in any anchor-based object detector, uncertainty is estimated holistically while avoiding costly sampling procedures. Further, the concept of existence is introduced: a probability measure that indicates whether an object truly exists in the real world, regardless of classification. Building upon these ideas, three applications of uncertainty and existence are explored, namely the Existence Map, the Uncertainty Map and the Existence Probability. Whilst the aforementioned maps encode the existence measure and the aleatoric uncertainty over the space of input samples, the Existence Probability merges the information provided by the Existence Map with the standard detections, supplementing model outputs. Evaluation showcases the coherence of the uncertainty estimates and demonstrates the usefulness of the Existence and Uncertainty Maps in supporting the standard model, providing open-set capabilities and assigning a degree of confidence to true positives, false positives and false negatives. The merging strategy of the Existence Probability yields a considerable improvement in the performance of the object detector both in validation and under perturbation, while detecting all classes of the dataset despite being trained only on cars, pedestrians and cyclists. The second part of this thesis features a study of the underspecification distribution and its connection with the epistemic uncertainty. Underspecification, a recently coined concept, greatly endangers deep learning deployment in safety-critical systems, as it describes the variability of predictors generated by a single architecture with increasingly diverging performance in the application domain. The analysis performed shows that, if the uncertainty estimates are correctly calibrated, a single predictor is sufficient to predict the spread of the underspecification distribution, avoiding repeated costly training sessions. All proposed methods are designed to be model-agnostic, real-time compatible, and seamlessly applicable to deployed models without requiring retraining, underscoring their significance for robust and interpretable object detection in autonomous driving.
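
As a purely hypothetical illustration of how an existence measure could supplement standard detections, the sketch below multiplies each detection score by the value of a spatial existence map at the box centre; the lookup and the multiplicative fusion rule are assumptions made for the example, not the merging strategy proposed in the thesis.

    import numpy as np

    def fuse_existence(boxes, scores, existence_map, img_w, img_h):
        # boxes: (N, 4) pixel coordinates (x1, y1, x2, y2); existence_map: (H, W) in [0, 1].
        H, W = existence_map.shape
        fused = np.empty_like(scores, dtype=float)
        for i, (x1, y1, x2, y2) in enumerate(boxes):
            cx = int((x1 + x2) / 2 / img_w * (W - 1))  # map the box centre onto the grid
            cy = int((y1 + y2) / 2 / img_h * (H - 1))
            fused[i] = scores[i] * existence_map[cy, cx]
        return fused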

PhD Defense in Informatics Engineering: “Aiding researchers making their computational experiments reproducible”

Candidate:
Lázaro Gabriel Barros da Costa

Date, Time and Location:
18th of July 2025, 16:00, Sala de Atos of the Faculty of Engineering of the University of Porto.

President of the Jury:
Pedro Nuno Ferreira da Rosa da Cruz Diniz (PhD), Full Professor, Department of Informatics Engineering, Faculty of Engineering, University of Porto.

Members:
Tanu Malik (PhD), Associate Professor, Department of Electrical Engineering and Computer Science, University of Missouri, USA;
Miguel Carlos Pacheco Afonso Goulão (PhD), Associate Professor, Department of Computer Science, Faculty of Science and Technology, New University of Lisbon;
Gabriel de Sousa Torcato David (PhD), Associate Professor, Department of Informatics Engineering, Faculty of Engineering, University of Porto;
Jácome Miguel Costa da Cunha (PhD), Associate Professor, Department of Informatics Engineering, Faculty of Engineering, University of Porto (Supervisor).

The thesis was co-supervised by Susana Alexandra Tavares Meneses Barbosa (PhD), Senior Researcher at INESC TEC, Porto.

Abstract:

Scientific reproducibility and replicability are essential pillars of credible research, especially as computational experiments become increasingly prevalent across diverse scientific disciplines such as chemistry, climate science, and biology. Despite strong advocacy for Open Science and adherence to FAIR (Findable, Accessible, Interoperable, and Reusable) principles, achieving true reproducibility remains a formidable challenge for many researchers. Key issues such as complex dependency management, inadequate metadata, and the often cumbersome access to necessary code and data severely hamper reproducibility efforts. Moreover, existing reproducibility tools frequently offer piecemeal solutions that fail to address the multifaceted needs of diverse and complex experimental setups, particularly those that span multiple programming languages and involve intricate data systems. This thesis addresses these challenges by presenting a comprehensive framework designed to enhance computational reproducibility across a variety of scientific fields. Our approach involved a detailed systematic review of existing reproducibility tools to identify prevailing gaps and limitations in their design and functionality. This review underscored the fragmented nature of these tools, each supporting only aspects of the reproducibility process but none providing a holistic solution, particularly for experiments that require robust data handling or support for many programming languages.
To bridge these gaps, we introduced SCIREP, an innovative framework that automates essential aspects of the reproducibility workflow, such as dependency management, containerization, and cross-platform compatibility. This framework was rigorously evaluated using a curated dataset of computational experiments, achieving a reproducibility success rate of 94%.
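
The toy sketch below conveys the flavour of this kind of automation: it scans a Python script for imports and emits a minimal container recipe. It is an illustration only, not SCIREP's implementation, and the mapping from module names to installable packages is deliberately naive.

    import ast
    import sys

    def imported_modules(path):
        # Collect the top-level module names imported by the script.
        tree = ast.parse(open(path).read())
        mods = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                mods.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                mods.add(node.module.split(".")[0])
        return sorted(mods)

    def dockerfile_for(script):
        deps = " ".join(imported_modules(script))
        return (f"FROM python:3.11-slim\n"
                f"COPY {script} /app/{script}\n"
                f"RUN pip install {deps}\n"
                f'CMD ["python", "/app/{script}"]')

    if __name__ == "__main__":
        print(dockerfile_for(sys.argv[1]))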
Furthering the accessibility and usability of reproducible research, we developed SCICONV, a conversational interface that simplifies the configuration and execution of computational experiments using natural language processing. This interface significantly reduces the technical barriers traditionally associated with setting up reproducible studies, allowing researchers to interact with the system through simple, guided conversations. Evaluation results indicated that SCICONV successfully reproduced 83% of the experiments in our curated dataset with minimal user input, highlighting its potential to make reproducible research more accessible to a broader range of researchers.
Moreover, recognizing the critical role of user studies in evaluating tools, methodologies, and prototypes, particularly in software engineering and behavioral sciences, this thesis also extends into the realm of experimental tool evaluation. We conducted a thorough analysis of existing tools used for software engineering and behavioral science experiments, identifying and proposing specific features designed to enhance their functionality and ease of use for conducting user studies. These proposed features were validated through a survey involving the research community, confirming their relevance and the need for their integration into existing and future tools. The contributions of this thesis are manifold, encompassing the development of a classification framework for reproducibility tools, the creation of a standardized benchmark dataset for assessing tool efficacy, and the formulation of SCIREP and SCICONV to significantly advance the state-of-the-art in computational reproducibility. Looking forward, the research will focus on expanding the capabilities of reproducibility tools to support more complex scientific workflows, further enhancing user interfaces, and integrating additional functionalities to fully support user studies. By doing so, this work aims to pave the way for a more robust, accessible, and efficient computational reproducibility ecosystem that can meet the evolving needs of the global research community.

Keywords: Reproducibility; Replicability; Reusability; Computational Experiments; Conversational User Interface; User Studies.

PhD Defense in Digital Media: “Mapping Multi-Meter Rhythm in the DFT: Towards a Rhythmic Affinity Space”

Candidate:
Diogo Miguel Filipe Cocharro

Date, time and location:
22nd of July 2025, 15:00, Sala de Atos of the Faculty of Engineering of the University of Porto.

President of the Jury:
António Fernando Vasconcelos Cunha Castro Coelho (PhD), Associate Professor in the Department of Informatics Engineering at the Faculty of Engineering of the University of Porto.

Members:
Matt Chiu (PhD), Assistant Professor of Music Theory at the Conservatory of Performing Arts at Baldwin Wallace University, USA;
Daniel Gómez-Marín (PhD), Professor, Department of Design and Innovation, School of Technology, Design and Innovation, Barberi Faculty of Engineering, Design and Applied Sciences, Universidad Icesi, Colombia;
Sofia Carmen Faria Maia Cavaco (PhD), Assistant Professor in the Department of Computer Science at the Faculty of Science and Technology of Universidade Nova de Lisboa;
Sérgio Reis Cunha (PhD), Assistant Professor in the Department of Electrical and Computer Engineering at the Faculty of Engineering of the University of Porto;
Gilberto Bernardes de Almeida (PhD), Assistant Professor in the Department of Informatics Engineering at the Faculty of Engineering of the University of Porto (Supervisor).

The thesis was co-supervised by Rui Luis Nogueira Penha (PhD), Coordinating Professor of ESMAE – School of Music and Performing Arts.

Abstract:

Music is inherently a temporal manifestation, and rhythm is a crucial component of it. While rhythm can exist without melody or harmony, neither can exist without rhythm. However, rhythm is often understudied compared to harmony. Rhythmic affinity is a musical concept that describes the natural and pleasing relationship between two or more rhythmic patterns. This happens when these patterns, no matter how complex or seemingly unrelated, come together to create a sense of cohesion and flow rather than dissonance or conflict.
This affinity can arise from various factors, such as shared rhythmic motives, complementary and interlocking rhythmic structures, or a strong underlying pulse that unifies the different layers. For example, two complementary patterns that completely occupy the set of pulses in a cycle by filling each other’s silent pulses with their own active pulses are called interlocking rhythms. These interlocking rhythms are not limited to just the complementary nature of rhythms; we believe they can also be observed in patterns that feature coincident onsets or different underlying pulse grids. This diversity in rhythmic structures represents some of the musical properties we aim to explore in this study.
Music scholars have recently begun to explore affinity-related musical phenomena, particularly building on Harald Krebs’s seminal work on rhythmic dissonance, which offers a comprehensive framework for understanding and categorizing metric dissonance within music. Similarly, Godfried Toussaint’s research examines various methods for measuring rhythmic similarity and for analyzing and generating complementary and interlocking rhythms, providing insights into the structural interrelationships between different rhythmic patterns. Additionally, Clarence Barlow’s work on metrical affinities—often overlooked—contributes important perspectives on the relational characteristics between different meters.
We conducted preliminary experiments to assess the behavior of typical rhythmic similarity metrics across genres. Key findings revealed that similarity varies within a limited range across genres and instruments, which we identify as the affinity space. This systematic analysis motivates the discussion and research on the concept of rhythmic affinity, emphasizing the need to understand it as a concept distinct from rhythmic similarity. Furthermore, we identified several limitations that shape this thesis's main objectives and methodologies, namely the lack of metrics for multi-meter corpus analysis in the context of rhythmic cycles, e.g., loops.
In this context, this study focuses on preprocessing multi-meter representations of rhythmic patterns in the time domain, specifically designed for projection into the Discrete Fourier Transform (DFT) space, with the goal of exploring rhythmic affinities. We aimed to study the DFT of rhythmic loops towards a mathematical space that reflects metrical levels of alignment (or misalignment), which closely relates to Krebs's definition of metric dissonance. This phenomenon relates to practices commonly found in musical composition, such as poly-meter and poly-rhythms, which enable the superimposition of rhythmic patterns that, in principle, show low similarity to each other but are perceptually pleasing as a combined dissonance; the best-known example is the hemiola of three against two.
Our research follows and extends the body of music theory literature on applying the DFT of pitch classes to obtain distances that reflect human perception and music-theoretical principles. Its application to rhythmic structures is currently limited to particular contexts of a musical piece, not encompassing strategies for multi-meter rhythmic analysis. The main contribution lies in a methodology for multi-meter analysis in the DFT space. Our findings demonstrated that up-sampling the grid of pivotal metrical levels underlying rhythmic pattern representations enables the simultaneous depiction of meters with simple and compound subdivisions. This approach highlights structural relationships within the DFT space, reflected by close distances between related simple and compound metrical templates, for instance between 4/4 and 12/8 or between 3/4 and 6/8. We implemented this methodology in a prototype system capable of generating rhythmic patterns based on metrical templates and sorting them according to their similarity to a user-defined pattern.
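
A minimal sketch of the up-sampling idea, assuming binary onset vectors placed on a common 48-pulse cycle (the least common multiple of a 16-pulse 4/4 grid and a 12-pulse 12/8 grid); the Euclidean distance over DFT magnitudes is only an illustrative stand-in for the distances studied in the thesis.

    import numpy as np

    def upsample(pattern, grid=48):
        # Place a binary onset pattern onto a finer, shared pulse grid.
        out = np.zeros(grid)
        step = grid // len(pattern)
        out[::step] = pattern
        return out

    def dft_magnitudes(pattern):
        return np.abs(np.fft.rfft(pattern))

    meter_44 = upsample([1, 0, 0, 0] * 4)   # 16-pulse 4/4 template, onsets on the four beats
    meter_128 = upsample([1, 0, 0] * 4)     # 12-pulse 12/8 template, onsets on the four beats
    dist = np.linalg.norm(dft_magnitudes(meter_44) - dft_magnitudes(meter_128))
    print(f"DFT-space distance between 4/4 and 12/8 templates: {dist:.3f}")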

PhD Defence in Digital Media: “Integration of models for linked data in cultural heritage and contributions to the FAIR principles”

Candidate:
Inês Dias Koch

Date, Time and Location
1st of July 2025, 14:30, Sala de Atos da Faculdade de Engenharia da Universidade do Porto

Title:
“Integration of models for linked data in cultural heritage and contributions to the FAIR principles”

President of the Jury:
João Carlos Pascoal Faria (PhD), Full Professor, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto.

Members:
Maja Žumer (PhD), Full Professor, Department of Library and Information Science of the University of Ljubljana, Slovenia;

María Poveda Villalón (PhD), Associate Professor, Department of Artificial Intelligence of the Technical University of Madrid, Spain;

José Luís Brinquete Borbinha (PhD), Full Professor, Department of Computer Science and Engineering, Instituto Superior Técnico da Universidade de Lisboa;

Pedro Manuel Rangel Santos Henriques (PhD), Full Professor, Department of Informatics, Escola de Engenharia da Universidade do Minho;

Carla Alexandra Teixeira Lopes (PhD), Associate Professor, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto (Supervisor);

Mariana Curado Malta (PhD), Assistant Professor, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto.

The thesis was co-supervised by Maria Cristina de Carvalho Alves Ribeiro (PhD), Retired Associate Professor in the Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto.

Abstract:

The various areas of Cultural Heritage, such as archives, museums, and libraries, have invested in technological development and innovation to make their resources available to users more efficiently and completely. To this end, the description of these resources is essential so that they are explained in terms of their context and content, as well as to facilitate their intelligibility and accessibility. In this sense, each area has begun to develop its own models and standards for describing the cultural objects it deals with. This has made these standards sector-specific and only able to fulfil the information needs within the area of knowledge for which they were developed, exploring only the information described within their domain. As a result, linking resources from different information sources is challenging.

With the need to make the standards and models more interoperable, linked data models emerged in Cultural Heritage. These models make it possible to link the various concepts from the different heritage areas efficiently and effectively, considering the Semantic Web's characteristics.

In Portugal, the Portuguese National Archives felt the need to develop a linked data model to describe their cultural objects, which led to the creation of the EPISA Project, the project from which this research emerged. Thus, this work aims to develop a linked data model to describe archival records, as well as to connect them with other heritage domains, integrating them with existing linked data models and promoting the access and reuse of data from heritage institutions based on the specialised description associated with the cultural objects of these institutions. Additionally, it aims to link existing data models to data from other sources available on the Web, such as Wikidata and DBpedia.

We carry out a study that includes existing data models in Cultural Heritage, such as CIDOC CRM in museums, RiC-CM in archives, and LRMoo in libraries, along with models that have emerged within Web projects, such as DBpedia and Wikidata. By describing archival objects, as well as creating and exploring relationships between other data models, this study identifies common characteristics and principles, as well as the distinctive aspects of each area. Furthermore, it identifies the possibility of linking elements of the various models, ensuring that the models can be adapted to applications without losing the richness of the conceptualisation carried out in each of the domains.

In a context in which the Web promotes the explicitness of data semantics through the Semantic Web and provides tools to represent it, it is necessary, on the one hand, to create links between models from different communities and, on the other, to adjust the complexity of each model to each application according to its specific requirements. The FAIR Principles (Findable, Accessible, Interoperable, Reusable) were therefore used as one of the sources for the requirements that data and metadata must fulfil to have a modular structure. We bring together a collection of use cases linked to archive users, including profiles ranging from collection managers to heritage promoters and informal users. In addition, we compile and evaluate a set of data modelling experiences using different models.

This work resulted in ArchOnto, a modular ontology that describes archival records. It was developed considering existing archival standards and validated by experts in the field, specifically archivists from the Portuguese National Archives. ArchOnto is based on CIDOC CRM, combined with four other specific ontologies also developed in this work.

The development of ArchOnto led to the creation of a prototype platform designed to explore and manipulate archival records. Additionally, it offers the potential to apply this ontology to other domains, specifically to the representation of cinematographic records.
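
To make the linking idea concrete, the snippet below sketches, using Python and rdflib, how an archival record might be typed with a CIDOC CRM class and pointed to an external Web resource. The record URI, the label, and the Wikidata entity are invented for the example and are not taken from ArchOnto or the EPISA data.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, RDFS

    CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")

    g = Graph()
    record = URIRef("https://example.org/archive/record/123")      # hypothetical record URI
    g.add((record, RDF.type, CRM["E31_Document"]))                 # CIDOC CRM class for documents
    g.add((record, RDFS.label, Literal("Carta régia, 1512")))      # invented descriptive label
    g.add((record, RDFS.seeAlso,
           URIRef("https://www.wikidata.org/entity/Q42")))         # arbitrary Wikidata entity, shown only for the link pattern
    print(g.serialize(format="turtle"))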

Keywords: Cultural Heritage; Linked Open Data; Data Integration; Semantic Web; FAIR Principles; Digital Humanities.

PhD Defence in Informatics Engineering: “Towards Continuous Certification of Software Systems for Aerospace”

Candidate:
José Eduardo Ferreira Ribeiro

Date, Time and Location:
30th of June 2025, 14:30, Sala de Atos, Faculdade de Engenharia da Universidade do Porto

Title:
“Towards Continuous Certification of Software Systems for Aerospace”

President of the Jury:
Rui Filipe Lima Maranhão de Abreu (PhD), Full Professor, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto.

Members:
Miguel Mira da Silva (PhD), Full Professor, Department of Computer Science and Engineering, Instituto Superior Técnico da Universidade de Lisboa;

João Miguel Lobo Fernandes (PhD), Full Professor, Department of Informatics, Escola de Engenharia da Universidade do Minho;

João Carlos Pascoal Faria (PhD), Full Professor, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto;

João Gabriel Monteiro de Carvalho e Silva (PhD), Full Professor, Department of Informatics Engineering, Faculdade de Ciências e Tecnologia da Universidade de Coimbra (Co-Supervisor).

The thesis was supervised by Ademar Manuel Teixeira de Aguiar (PhD), Associate Professor of the Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto.

Abstract:

Since the publication of the Agile Manifesto in 2001, Agile methods have evolved to become the dominant approach in software development across diverse domains. However, their adoption in safety-critical systems development, such as aerospace, remains limited for reasons usually attributed to the stringent regulatory safety requirements imposed by domain-specific standards. This dissertation explores the applicability of Agile methods within the context of safety-critical aerospace software development, specifically under the guidelines of the DO-178C standard, and concludes that, contrary to common belief, Agile methods can also be used effectively in this context. The DO-178C standard, titled Software Considerations in Airborne Systems and Equipment Certification, is the principal certification guideline for aviation software for agencies such as the Federal Aviation Administration (FAA) and the European Union Aviation Safety Agency (EASA).

A key observation from discussions with professionals across different organizations and industries with strong safety requirements, including space, aerospace, railway, automotive, energy, and defence, is the widespread perception that traditional methods like the Waterfall model are indispensable, if not mandatory, for compliance and successful certification. This perception derives from the rigorous safety-related evidence required for certification. In aerospace software development, the minimal adoption of Agile methods and practices is attributed to the demands of DO-178C, regarded as a restrictive standard. However, contrary to this belief, DO-178C does not mandate any specific development method but instead provides guidelines and objectives to achieve the necessary safety-related evidence. This flexibility opens the possibility for Agile methods to be adapted to meet certification requirements while offering their well-documented advantages of incremental delivery and adaptability to changing requirements.

This research examines whether Agile methods, particularly the Scrum framework, can be effectively integrated into the development of safety-critical aerospace software systems while maintaining full compliance with DO-178C. The study introduces Scrum4DO178C, a novel Agile-friendly process tailored to address the specific challenges of aerospace software development, including its extensive verification and validation (V&V) efforts. Through a comprehensive review of literature, industry practices and data, as well as real-world insights from an industrial case study involving a critical aerospace project (Software Level A – Catastrophic), the research evaluates the feasibility and benefits of this approach. The case study demonstrates that Scrum4DO178C improves project performance, enhances responsiveness to changing requirements and reduces V&V efforts, in comparison with Waterfall, while fully complying with DO-178C.

The findings challenge the prevailing notion that Agile is inherently incompatible with safety-critical domains and suggest that, when adapted thoughtfully, Agile methods can satisfy the requirements of rigorous standards such as DO-178C. By bridging the gap between Agile methods and practices and safety-critical development, this work advocates for a paradigm shift in developing safety-critical software, promoting a more adaptive, customer-centric approach. Specifically, this research highlights Agile's capacity to accelerate knowledge acquisition through shorter delivery cycles and feedback loops, improve traceability, and manage late-stage requirement changes more efficiently, also in the aerospace domain.
Building on this foundational work, ongoing efforts are underway to enhance the Scrum4DO178C process through automation, enabling the automatic generation and reuse of outputs required for DO-178C compliance. Additionally, future research will extend these concepts to other aerospace standards and safety-critical domains, ensuring their applicability and compliance across diverse regulatory frameworks. Supported by collaborative initiatives with universities (e.g., Master's thesis projects at the Faculty of Engineering, University of Porto (FEUP) and the Department of Informatics Engineering of the University of Coimbra (UC)) and industry partners, this research aims to reshape industry perceptions of Agile's role in safety-critical systems, fostering innovation and adaptability in these complex environments.

Keywords: Agile; Aerospace; DO-178C; FAA; Safety-critical; Software development.

PhD Defence in Informatics Engineering: “An Optimization Strategy for Resource Allocation in Cyber Physical Production Systems”

Candidate:

Eliseu Moura Pereira

Date, Time and Location:

17th of June 2025, 10:00, Sala de Atos, Faculdade de Engenharia da Universidade do Porto

President of the Jury:

Carlos Miguel Ferraz Baquero-Moreno, Full Professor, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto.

Members:

Pedro Nicolau Faria da Fonseca (PhD), Assistant Professor, Department of Electronics, Telecommunications and Computer Science, Universidade de Aveiro;

Paulo Jorge Pinto Leitão (PhD), Principal Coordinating Professor, Department of Electrical Engineering, Instituto Politécnico de Bragança;

André Monteiro de Oliveira Restivo (PhD), Associate Professor, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto;

Gil Manuel Magalhães de Andrade Gonçalves (PhD), Associate Professor with Habilitation, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto (Supervisor).

The thesis was co-supervised by João Pedro Correia dos Reis (PhD), Assistant Researcher, Department of Electrical and Computer Engineering, Faculdade de Engenharia da Universidade do Porto.

Abstract:

Cyber-Physical Production Systems (CPPSs) integrate computation, communication, and control technologies, delivering the flexibility needed for dynamic shop floor reconfiguration and efficient manufacturing. Compared with traditional industries, a factory with higher shop floor flexibility has the advantage of easier product customization and changeover. This advantage manifests mainly when the industry introduces a new product or when the shop floor produces highly variable products that need constant reconfiguration. Several manufacturers adopt this production philosophy, for example in the automotive industry, where the high variability of car models and specifications requires different shop floor setups and configurations. In sequential production lines, such as car assembly lines, reconfigurable CPPSs play an essential role because processing one product can affect overall production performance, requiring shop floor reconfiguration to optimize execution. A significant challenge in CPPSs arises when reacting to changing conditions, such as new products or requirements, which demand reconfiguration. Current systems rely on manual intervention, leading to significant delays, especially in large industries where reprogramming hundreds of machines can take days or weeks. This thesis addresses this issue by proposing a platform that automatically optimizes software assignment to resources, speeding up development, deployment, and reconfiguration and enabling CPPSs to adapt quickly to external disturbances.

With the purpose of accelerating the development, reconfiguration, and execution of software in CPPSs, this thesis aims to optimize the assignment of Function Blocks (FBs) to the devices of an IEC 61499-based Cyber-Physical System (CPS), reducing the total execution time (reconfiguration time plus FB pipeline execution time). With this main goal, the thesis resulted in the development of three tools: 1) the Dynamic Intelligent Architecture for Software and Modular Reconfiguration (DINASORE), which enables the development, execution, and manual reconfiguration of IEC 61499-based CPSs; 2) the Task Resources Estimator and Allocation Optimizer (TREAO), which simulates and optimizes the assignment of tasks/FBs to the CPS machines, recommending software layouts suited to the CPS characteristics; and 3) the Task Assignment Optimization and Synchronization Engine (T-Sync), which integrates the previous two tools into a single solution and optimizes at run time the assignment of FBs to the devices of an IEC 61499-based CPS.

Integrating these tools in T-Sync resulted in a differentiating solution because it 1) allows online FB assignment to optimize the CPS execution continuously and 2) improves the transparency and interoperability between FBs across IEC 61499-based devices. With this solution, the performance (total execution time) of running FBs in reconfigurable CPSs improved by 30% in a simulated environment and by 61% in a CPS. In addition to T-Sync improving total execution time, DINASORE enhances reconfiguration efficiency and flexibility, while TREAO streamlines CPS development by optimizing task and FB assignments to the available resources. Beyond these, other algorithms were implemented and tested for task assignment optimization during this thesis, and other tools were developed to increase interoperability and portability in CPSs. Future work envisions the automatic generation of FB pipelines from structured requirements, such as formal specifications in UML diagrams, consequently integrating TREAO, manufacturing process simulators, and T-Sync to iteratively validate, optimize, and simulate factory layouts and to deploy CPS software with enhanced flexibility and adaptability.
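
The toy example below illustrates the underlying optimization problem (choose a device for each FB so that pipeline execution time plus reconfiguration time is minimal) using an exhaustive search over a made-up cost table; the actual tools rely on their own estimation models and optimization algorithms.

    from itertools import product

    FBS = ["acquire", "filter", "detect"]
    DEVICES = ["device-A", "device-B"]
    EXEC = {("acquire", "device-A"): 2, ("acquire", "device-B"): 3,   # illustrative execution times
            ("filter", "device-A"): 5,  ("filter", "device-B"): 2,
            ("detect", "device-A"): 4,  ("detect", "device-B"): 6}
    CURRENT = {"acquire": "device-A", "filter": "device-A", "detect": "device-A"}
    RECONF = 1  # assumed fixed cost for moving an FB away from its current device

    def total_time(assignment):
        exec_time = sum(EXEC[(fb, dev)] for fb, dev in assignment.items())
        reconf_time = sum(RECONF for fb, dev in assignment.items() if CURRENT[fb] != dev)
        return exec_time + reconf_time

    best = min((dict(zip(FBS, devs)) for devs in product(DEVICES, repeat=len(FBS))),
               key=total_time)
    print(best, total_time(best))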

Keywords: Cyber-Physical Production Systems; IEC 61499; Machine Learning; Task Assignment.

PhD Defence in Informatics Engineering: “Intelligent Ticket Management Assistant for Helpdesk Operations”

Candidate:

Leonardo da Silva Ferreira

Date, Time and Location:

13th of June 2025, 9:30, Sala de Atos, Faculdade de Engenharia da Universidade do Porto

President of the Jury:

Pedro Nuno Ferreira da Rosa da Cruz Diniz, PhD, Full Professor, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto

Members:

Pedro Manuel Henriques da Cunha Abreu, PhD, Associate Professor with Habilitation, Department of Informatics Engineering, Faculdade de Ciências e Tecnologia da Universidade de Coimbra;

Paulo Jorge Freitas de Oliveira Novais, PhD, Full Professor, Department of Computer Science, Escola de Engenharia da Universidade do Minho;

Carlos Manuel Milheiro de Oliveira Pinto Soares, PhD, Associate Professor, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto;

Ana Paula Cunha da Rocha, PhD, Associate Professor, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto;

Daniel Augusto Gama de Castro Silva, PhD, Assistant Professor, Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto (Supervisor).

The thesis was co-supervised by Mikel Uriarte Itzazelaia, Associate Professor at the Escuela de Ingeniería de Bilbao, Universidad del País Vasco.

Abstract:

With the dynamic evolution of the internet, particularly in domains such as multimedia services, cloud computing, internet of things, virtualization, and artificial intelligence, companies have witnessed significant expansion in their market and services. However, this growth has also exposed numerous vulnerabilities that threaten the confidentiality, integrity, and availability of organizational and personal data. As information technology analysts work to address security system alerts, artificial intelligence has introduced new avenues for breaching security, ranging from simple, low-cost methods to highly sophisticated attacks. Low-cost approaches include phishing and password spraying, which exploit human error and weak password practices. In contrast, more complex threats include advanced persistent attacks and zero-day exploits, which require significant expertise and resources, often disrupting critical systems. Many organizations rely on cybersecurity helpdesk centers, internal or outsourced, to manage incidents. However, these centers often struggle to respond effectively due to data overload and a lack of qualified operators.

This dissertation addresses the shortage of skilled operators and the high volume of incidents in helpdesk operations by developing a ticket management assistant to support human operators in resolving incidents. The framework integrates a context-aware recommender system that identifies the fastest analyst-procedure pair for each incident and continually improves with each treatment followed. To ensure data privacy, this recommender system is trained using artificial data generated by a custom synthetic data generator. Furthermore, this thesis explores the possibility of enhancing this assistant with automated machine learning functionalities to predict incoming tickets. This feature could help managers anticipate workloads and proactively adjust the composition of the security teams.

The development of this framework is supported by the collaboration with a cybersecurity company, S21sec, which provides anonymized historical incident treatment data structures and taxonomies. However, synthetic data generation techniques are essential due to the absence of granular information on incident resolution and related parameters in the shared data set, which also requires privacy. The implemented generator builds artificial datasets that can mimic distributions similar to those observed in the real dataset while emulating real-world behaviours, including ticket prioritization, scheduling, and treatment.

The artificial data generator is evaluated for its efficiency in replicating real-world datasets using similarity measures such as the Hellinger distance and the Kullback-Leibler divergence. Furthermore, several ticket scheduling scenarios are explored, varying the number of operators and their distribution across three work shifts. The results demonstrate that this framework can replicate the ticket distributions and treatment durations observed in real datasets. Additionally, it allows for the simulation of real-world helpdesk operations, providing a solid foundation for exploring diverse operational contexts without compromising privacy. The analysis of ticket scheduling consistently shows that scenarios characterized by a high shift imbalance and fewer operators lead to longer wait times and more tickets scheduled for later treatment.
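
For reference, the two similarity measures named above can be computed between a real and a synthetic histogram over the same bins, as in the sketch below; the example distributions are invented.

    import numpy as np

    def hellinger(p, q):
        return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

    def kl_divergence(p, q, eps=1e-12):
        p, q = p + eps, q + eps          # avoid log(0) for empty bins
        return np.sum(p * np.log(p / q))

    real = np.array([0.40, 0.35, 0.15, 0.10])        # e.g. share of tickets per shift (invented)
    synthetic = np.array([0.38, 0.33, 0.18, 0.11])
    print(f"Hellinger distance: {hellinger(real, synthetic):.4f}")
    print(f"KL divergence: {kl_divergence(real, synthetic):.4f}")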

The recommender system is assessed from two perspectives: scalability and impact on ticket treatment. The first phase uses various test datasets with different sizes and numbers of operators, analyzed with metrics such as the average recommendation time and memory usage. In contrast, the impact on ticket treatment is examined by considering improvements in ticket waiting times before being allocated to an operator and the response time required for their resolution, using different recommendation acceptance degrees. The results indicate that the number of operators the recommender system utilizes has a slightly larger impact on its scalability than the number of test tickets. Both features show a similar linear growth pattern regarding the referred metrics, but the number of operators has a larger slope. Integrating this recommender system into the ticket treatment reduced the average response time by 37.9% to 45.1% and the average wait time by 62.2% to 63.2%, assuming operators always accept the recommendations. With varying recommendation acceptance rates, the average wait time remains constant, while the response time improvement ranges from 0.4% to 11.7%.

The potential application of automated machine learning for predictive analysis is explored through a case study, comparing the system's recommended team dimensionality decisions with expected outcomes. The case study evaluates the system based on prediction accuracy and its ability to suggest team size adjustments. Among the tested dataset distributions, models trained on three years of data outperformed those trained on four years, showing a better mean average error against real data on ticket frequency throughout the year. Regarding team dimensionality recommendations, including hiring or dismissing operators, the tool based on automated machine learning frequently proposed decisions closely aligned with those that could have been proposed in the same period.

Collectively, these results show that the proposed framework can optimize ticket treatment workflows in real-world applications, leading to more efficient use of resources and reduced operational delays. Furthermore, its ability to simulate real-world operations without compromising privacy allows security operations centers to test several scenarios and refine their strategies.

Keywords: Helpdesk; Ticket; Cybersecurity; Synthetic Data; Recommendation Systems.

PhD Defence in Informatics Engineering: “Inmplode: A Framework to Interpret Multiple Related Rule-Based Models”

Candidate:

Pedro Rodrigo Caetano Strecht Ribeiro

Date, Time and Location:

13th of June 2025, 15:00, Sala de Atos, Faculdade de Engenharia, Universidade do Porto 

President of the Jury:

Rui Filipe Lima Maranhão de Abreu, PhD, Full Professor, Department of Informatics Engineering, Faculdade de Engenharia, Universidade do Porto 

Members:

Johannes Fürnkranz, PhD, Full Professor, Department of Computer Science of the Institute for Application-Oriented Knowledge Processing at the Johannes Kepler University Linz, Austria;

José María Alonso Moral, PhD, Full Professor, Department of Electronics and Computing, Escuela Técnica Superior de Ingeniería de la Universidad de Santiago de Compostela, Spain;

José Luís Cabral de Moura Borges, PhD, Associate Professor, Department of Industrial Engineering and Management, Faculdade de Engenharia, Universidade do Porto;

João Pedro Carvalho Leal Mendes Moreira, PhD, Associate Professor, Department of Informatics Engineering, Faculdade de Engenharia, Universidade do Porto (Supervisor).

The thesis was co-supervised by Carlos Manuel Milheiro de Oliveira Pinto Soares, PhD, Associate Professor, Department of Informatics Engineering, Faculdade de Engenharia, Universidade do Porto. 

Abstract:

This thesis investigates the challenges and opportunities presented by the increasing trend of using multiple specialized models, referred to as operational models, to address complex data analysis problems. While such an approach can enhance predictive performance for specific sub-problems, it often leads to fragmented knowledge and difficulties in understanding overarching organizational phenomena. This research focuses on synthesizing the knowledge embedded within a collection of decision tree models, chosen for their inherent interpretability and suitability for knowledge extraction. Consider, for example, a company with chain stores or a university with diverse programs, each using dedicated prediction models (for sales or dropout, respectively). While these localized models are important, a global perspective is valuable organization-wide. However, managing many operational models, especially for cross-program or cross-store analysis, can be overwhelming.

A methodology framed within a comprehensive framework is introduced to merge sets of operational models into consensus models. These consensus models are directed towards higher-level decision-makers, enhancing the interpretability of the knowledge generated by the operational models. The framework, named Inmplode, addresses common challenges in model merging and presents a highly customizable process. This process features a generic workflow and adaptable components, detailing alternative approaches for each subproblem encountered in the merging process.

The framework was applied to four public datasets from diverse business areas and to a case study in education using data from the University of Porto. Different model merging approaches were explored in each case, illustrating various process instantiations. The model merging process revealed that the resulting consensus models are frequently incomplete, meaning they cannot cover the entire decision space, which can undermine their intended purpose. To address the issue of incompleteness, two novel methodologies are explored: one relies on the generation of synthetic datasets followed by decision tree training, while the other uses a specialized algorithm designed to construct a decision tree directly from aggregated (i.e., symbolic) data.
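
A rough sketch of the first, synthetic-data route, assuming scikit-learn: randomly sampled points are labelled by a vote of several operational trees, and a single consensus tree is then fitted on them so that it covers the whole decision space. The sampling scheme, voting rule, and hyper-parameters are illustrative assumptions, not Inmplode's exact procedure.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in data and three "operational" trees trained on disjoint subsets.
    X, y = make_classification(n_samples=600, n_features=6, random_state=0)
    operational = [DecisionTreeClassifier(max_depth=4, random_state=s).fit(X[s::3], y[s::3])
                   for s in range(3)]

    # Sample synthetic points over the feature ranges and label them by majority vote.
    rng = np.random.default_rng(0)
    synthetic = rng.uniform(X.min(axis=0), X.max(axis=0), size=(2000, X.shape[1]))
    votes = np.stack([m.predict(synthetic) for m in operational])
    labels = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

    # Fit one complete consensus tree on the synthetic, vote-labelled data.
    consensus = DecisionTreeClassifier(max_depth=4).fit(synthetic, labels)
    print(consensus.score(X, y))   # sanity check against the original data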

The effectiveness of these methodologies in generating complete consensus models from incomplete rule sets is evaluated across the five datasets. Empirical results demonstrate the feasibility of overcoming the incompleteness issue, contributing to knowledge synthesis and decision tree modeling. However, tradeoffs were identified between completeness and interpretability, predictive performance, and the fidelity of consensus models.

Overall, this research addresses a critical gap in the literature by providing a comprehensive framework for synthesizing knowledge from multiple decision tree models, focusing on overcoming the challenge of incompleteness. The conclusions have implications for organizations seeking to use specialized models while maintaining a holistic understanding of the analyzed phenomenon.

Keywords: interpretability; rule-based models; model merging framework; decision trees; completeness.

PhD Defense in Digital Media: “Interaction methods for digital musical instruments: Application in personal devices”

Candidate:
Alexandre Resende Clément

Date, Time and Location:
5th of June 2025, 14:30, Sala de Atos, Faculdade de Engenharia, Universidade do Porto

President of the Jury:
António Fernando Vasconcelos Cunha Castro Coelho, PhD, Associate Professor with Habilitation, Department of Informatics Engineering, Faculdade de Engenharia, Universidade do Porto

Members:
Marcelo Mortensen Wanderley, PhD, Full Professor, Department of Music Research, Schulich School of Music, McGill University, Canada;
Damián Keller, PhD, Associate Professor, Centro de Educação, Letras e Artes da Universidade Federal do Acre, Brazil;
Sofia Carmen Faria Maia Cavaco, PhD, Assistant Professor, Informatics Department, Faculdade de Ciências e Tecnologia, Universidade NOVA de Lisboa;
Rui Pedro Amaral Rodrigues, PhD, Associate Professor, Department of Informatics Engineering, Faculdade de Engenharia, Universidade do Porto;
Gilberto Bernardes de Almeida, PhD, Assistant Professor, Department of Informatics Engineering, Faculdade de Engenharia, Universidade do Porto (Supervisor).

Abstract:

This thesis explores the potential of mobile handheld devices as tools for digital musical instrument interaction and participatory performance. Guided by the principles of ubiquitous music and intuitive interaction, the research investigates how mobile handheld devices can address challenges and unlock opportunities in contemporary music-making through participatory frameworks, gesture mappings, and multimodal feedback. Three experiments form the foundation of this study. The first describes and evaluates a system that enables large-scale audience participation in multimedia performances. It highlights the ability of mobile handheld devices to engage users and foster collaboration but reveals challenges in designing intuitive interactions for untrained participants. The second experiment examines how users instinctively map gestures to core musical parameters, such as pitch, duration, and amplitude, identifying natural trends and the influence of musical training and experience on interaction strategies. The third focuses on evaluating the impact of multimodal feedback, combining auditory, visual, and haptic modalities, in note pitch tuning tasks.
The findings underscore the importance of designing standardised interaction guidelines and integrating multimodal feedback to make digital musical instruments more accessible and intuitive. Experiment 1 showed that the lack of a unified interaction model limited intuitive engagement, highlighting the need for standards that balance individual creativity with group intent. Experiment 2 found clear user preferences for gesture mappings of onset, pitch, and duration, shaped by cultural familiarity and supporting context-aware design. Experiment 3 showed that while multimodal feedback had little immediate effect on accuracy, it improved user confidence and may aid long-term learning. This research advances the understanding of how mobile handheld devices can support participatory and creative music-making, contributing to the development of inclusive, user-friendly, and versatile musical tools.