Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
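For reference, the relevant setting is Jekyll's top-level `future` option; a minimal `_config.yml` fragment looks like this:

```yaml
# _config.yml — Jekyll site configuration.
# With future: false, posts whose date is in the future are
# skipped at build time instead of being published.
future: false
```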
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in IEEE Transactions on Information Forensics and Security, 2020
This paper is about how to generate adversarial perturbations efficiently.
Recommended citation: Zhang, H., Avrithis, Y., Furon, T., & Amsaleg, L. (2020). Walking on the edge: Fast, low-distortion adversarial examples. IEEE Transactions on Information Forensics and Security, 16, 701-713. https://ieeexplore.ieee.org/abstract/document/9186644
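As a generic illustration of gradient-sign attacks (not the paper's walking-on-the-edge method), here is a minimal FGSM-style sketch on a toy linear classifier; the weights, input, and epsilon are all made up for the example:

```python
import numpy as np

# Toy linear classifier: score(x) = w @ x; positive score => class 1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.5, -0.25, 1.0])   # classified as class 1 (score > 0)

def fgsm_step(x, w, eps):
    """One FGSM-style step: move against the score gradient.

    For a linear score w @ x, the gradient w.r.t. x is just w, so the
    L_inf-bounded step that most decreases the score is -eps * sign(w).
    """
    return x - eps * np.sign(w)

score_before = w @ x                  # positive: original class
x_adv = fgsm_step(x, w, eps=0.6)
score_after = w @ x_adv               # negative: prediction flipped
```

Real attacks on deep networks work the same way, except the gradient comes from backpropagation rather than a closed form.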
Published in EURASIP Journal on Information Security, 2020
This paper is about how to generate smooth adversarial perturbations.
Recommended citation: Zhang, H., Avrithis, Y., Furon, T., & Amsaleg, L. (2020). Smooth adversarial examples. EURASIP Journal on Information Security, 2020, 1-12. https://link.springer.com/article/10.1186/s13635-020-00112-z
Published in Proceedings of the 38th AAAI Conference on Artificial Intelligence, 2024
This paper is about the adversarial robustness of Neural Radiance Fields (NeRF).
Recommended citation: Jiang, W., Zhang, H., Wang, X., Guo, Z., & Wang, H. (2024, March). NeRFail: Neural radiance fields-based multiview adversarial attack. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 19, pp. 21197-21205). https://ojs.aaai.org/index.php/AAAI/article/view/30113
Published in Computer Vision and Image Understanding, 2024
This paper is about how to optimize saliency maps for interpretability.
Recommended citation: Zhang, H., Torres, F., Sicre, R., Avrithis, Y., & Ayache, S. (2024). Opti-CAM: Optimizing saliency maps for interpretability. Computer Vision and Image Understanding, 248, 104101. https://www.sciencedirect.com/science/article/pii/S1077314224001826
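As a rough illustration of the general CAM family (not Opti-CAM itself, which optimizes the channel weights per image), a saliency map can be formed as a rectified weighted sum of feature maps; the shapes and weights below are hypothetical:

```python
import numpy as np

def cam_style_saliency(feature_maps, weights):
    """Generic CAM-style saliency map.

    feature_maps: (K, H, W) channel activations from a conv layer.
    weights:      (K,) per-channel weights (fixed here; Opti-CAM
                  instead optimizes them for each input image).
    Returns an (H, W) map rectified and normalized to [0, 1].
    """
    combined = np.tensordot(weights, feature_maps, axes=1)  # (H, W)
    saliency = np.maximum(combined, 0.0)                    # ReLU
    if saliency.max() > 0:
        saliency = saliency / saliency.max()                # scale to [0, 1]
    return saliency
```

The resulting map is typically upsampled to the input resolution and overlaid on the image as a heatmap.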
Published:
I was invited to give a tutorial on adversarial attacks in deep learning for researchers attending the workshop. In this session, I introduced the background, motivation, and fundamental concepts of the problem before discussing classic work on adversarial attacks and presenting state-of-the-art advancements, including our work on adversarial attacks on images. I also provided an overview of existing tools for running attacks.
Published:
I was invited to give a talk on the interpretability problem in deep learning to the entire research team. In this session, I introduced the background, motivation, and fundamental concepts of the problem before presenting our work on optimizing saliency maps for improved interpretability.
Published:
I was invited to give a talk on adversarial attacks in deep learning for bachelor’s students. In this session, I introduced the background, motivation, and fundamental concepts of the problem before discussing classic work on adversarial attacks and presenting state-of-the-art advancements, including our work on adversarial attacks on images.
Published:
I was invited to give a talk on adversarial attacks in 3D representation for master’s students. In this session, I introduced the background, motivation, and fundamental concepts of the problem before presenting our work on adversarial attacks targeting 3D point clouds and Neural Radiance Fields (NeRF).
Published:
In this invited talk, we introduced the fundamentals of interpretability in neural networks, aiming to make the topic accessible to university students new to the field. We explored why interpretability is essential, discussed key methods for analyzing neural networks, and highlighted how these insights can pave the way for impactful research. The session sought to inspire and equip students with the knowledge to embark on their AI research journey.
Summer School, Southwest University, 2023
Open Course on Trusted Intelligent Algorithms in Intelligent Vehicles: Planning and Control.
Master Seminar, Saarland University, 2024
This seminar course delves into the crucial and evolving field of explainability in machine learning (ML). As ML models become increasingly complex and integral to various domains, understanding how these models make decisions is essential. This course will explore different methodologies for interpreting ML models, including rule-based, attribution-based, example-based, prototype-based, hidden semantics-based, and counterfactual-based approaches. Through a combination of paper readings, discussions, and presentations, students will gain a comprehensive understanding of the challenges and advancements in making ML models transparent and interpretable.