There are several technological and ethical challenges that undermine the trustworthiness of Machine Learning (ML). One of the main challenges is the lack of robustness, an essential property for ensuring that ML models are used securely. Improving robustness is no easy task because models are inherently susceptible to adversarial examples: data samples with subtle perturbations that cause unexpected behavior. ML engineers and security practitioners still lack the knowledge and tools to prevent such disruptions, so adversarial examples pose a major threat to ML and to the intelligent systems that rely on it.
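To make the idea of an adversarial example concrete, here is a minimal, self-contained sketch (not taken from the talk) of how a small worst-case perturbation can flip the prediction of a simple linear classifier. The weights, bias, sample, and perturbation budget below are illustrative assumptions, and the gradient-sign step mirrors the well-known FGSM technique.

```python
import numpy as np

# Illustrative linear classifier: predicts class 1 when w @ x + b > 0.
# The weights, bias, and sample are made-up values for demonstration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x: np.ndarray) -> int:
    return int(w @ x + b > 0)

# A sample that the classifier labels as class 1 (score = 1.6).
x = np.array([2.0, 0.5, 1.0])

# FGSM-style perturbation: for a linear model, the gradient of the
# decision score with respect to x is simply w, so stepping each
# feature by -epsilon * sign(w) lowers the score as much as an
# L-infinity budget of epsilon allows.
epsilon = 0.9
x_adv = x - epsilon * np.sign(w)

print(predict(x))                 # 1 -- original sample, correctly classified
print(predict(x_adv))             # 0 -- adversarial sample, prediction flipped
print(np.max(np.abs(x_adv - x)))  # 0.9 -- each feature moved by at most epsilon
```

In realistic settings the model is non-linear and gradients may have to be estimated through queries, and tabular features carry domain constraints that a valid perturbation must respect, which is part of what makes realistic adversarial ML in these domains considerably harder.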
“Artificial Intelligence and Cybersecurity: The (lack of) security of Machine Learning models” will be presented by João Vitorino (ProDEI student) on March 21, at 14:00, in room I-115.
Short Bio:
João Vitorino is a researcher at GECAD, an R&D unit of ISEP, and a PhD candidate at FEUP, in the Doctoral Program in Informatics Engineering. He holds a Master’s degree in Artificial Intelligence Engineering, in addition to several certifications in the fields of AI and computer networking. He has collaborated with various companies and institutions in international R&D projects, and has been responsible for the conceptualization and development of AI solutions for several real-world cybersecurity applications.
The focus of his work has been adversarial robustness in complex tabular data domains. He has developed an intelligent method that performs realistic adversarial attacks, as well as training mechanisms that provide secure ML models for complex tasks like cyber-attack classification. João received the 2023 Outstanding MSc Thesis Award from the IEEE Portugal Section. His thesis, “Realistic Adversarial Machine Learning to improve Network Intrusion Detection”, analyses the robustness of machine learning algorithms and proposes the “A2PM – Adaptive Perturbation Pattern Method”.