Talks and presentations

See a map of all the places I've given a talk!

Adversarial Robustness and Interpretability: Where Empirical ML Meets Formal Guarantees

February 12, 2026

Talk, KASTEL, Karlsruher Institut für Technologie (KIT), Karlsruhe, Germany

In this talk, I will present my research on adversarial robustness and interpretability from an empirical machine learning perspective. Adversarial examples reveal systematic failure modes of neural networks and highlight the limitations of current evaluation practices, both for robustness and for explanation methods such as saliency maps. Across image classification, 3D perception, and physically grounded representations, adversarial analysis can be understood as a form of stress testing that produces concrete counterexamples to implicitly assumed system properties. Building on these observations, I will argue that many difficulties in assessing robustness, interpretability, and accountability stem from the absence of explicit specifications and formal guarantees. While empirical adversarial methods are effective at discovering violations, they do not by themselves clarify which properties should be satisfied or how they can be enforced. I will therefore discuss why formal methods, such as specification, verification, and counterexample-guided refinement, are a natural complement to adversarial machine learning, and outline potential directions for bridging these communities. The aim of the talk is to invite discussion on how empirical failure analysis can be integrated into workflows that support verifiable and accountable AI systems.

Building Trustworthy AI through the Lens of Adversarial Robustness and Explainability

June 04, 2025

Talk, ORAILIX, École Polytechnique, Paris, France

As artificial intelligence (AI) systems become increasingly integrated into safety-critical and high-stakes domains, ensuring their trustworthiness has emerged as a central research challenge. Two foundational pillars of trustworthy AI are adversarial robustness and explainability. Adversarial robustness addresses the vulnerability of machine learning models to carefully crafted perturbations that can cause erroneous or manipulated outputs, exposing critical weaknesses in reliability and security. Explainability, on the other hand, seeks to make AI systems transparent and interpretable, enabling stakeholders to understand, validate, and contest model decisions. In this presentation, I will introduce my research framed around these two core pillars.

Unveiling AI: Exploring Neural Networks and the Journey Towards Interpretability

September 20, 2024

Talk, Universität des Saarlandes, Saarbrücken, Germany

In this invited talk, I introduced the fundamentals of interpretability in neural networks, aiming to make the topic accessible to university students new to the field. I explained why interpretability is essential, discussed key methods for analyzing neural networks, and highlighted how these insights can pave the way for impactful research. The session aimed to inspire students and equip them with the knowledge to embark on their own AI research journey.

Adversarial Attack in 3D Representation

March 22, 2024

Talk, Tianjin University of Technology, Tianjin, China

I was invited to give a talk on adversarial attacks in 3D representation for master’s students. In this session, I introduced the background, motivation, and fundamental concepts of the problem before presenting our work on adversarial attacks targeting 3D point clouds and Neural Radiance Fields (NeRF).

Adversarial Robustness in Deep Learning

September 22, 2023

Talk, Chongqing Jiaotong University, Chongqing, China

I was invited to give a talk on adversarial attacks in deep learning for bachelor’s students. In this session, I introduced the background, motivation, and fundamental concepts of the problem before discussing classic work on adversarial attacks and presenting state-of-the-art advancements, including our work on adversarial attacks on images.

A Quick Tour: Deep Learning in Adversarial Context

November 27, 2020

Tutorial, Workshop I: Dependable Deep Learning at the Symposium on Dependable Software Engineering: Theories, Tools and Applications (SETTA), Guangzhou, China

I was invited to give a tutorial on adversarial attacks in deep learning for researchers attending the workshop. In this session, I introduced the background, motivation, and fundamental concepts of the problem before discussing classic work on adversarial attacks and presenting state-of-the-art advancements, including our work on adversarial attacks on images. I also provided an overview of existing tools for running attacks.