
Hi! I’m Anar Amirli, an applied AI scientist with experience in building interpretable and scalable machine learning systems. I recently completed my Master’s in Computer Science at Saarland University, advised by Prof. Antonio Krüger and Prof. Daniel Sonntag, where I developed a concept-based explainability framework for vision models.
Most recently, I worked as a Research Assistant at the German Research Center for AI (DFKI), contributing to projects ranging from AI models for medical imaging to real-time risk/anomaly detection systems for manufacturing and cloud-deployed ML workflows. I focus on developing AI systems that are practical and trustworthy in the real world.
Short CV
- 2025 Master’s Degree, Saarland University, Germany
- 2021–2025 Research Assistant, DFKI, Germany
- 2021–2022 Junior Applied Scientist, NTU Singapore, remote
- 2019 Bachelor’s Degree, Baku Engineering University, Azerbaijan
- 2019 Internship Semester, ATL Tech, Azerbaijan
- 2018 Summer Internship, Middle East Technical University, Turkey
Industrial Training & Certifications
- ongoing Agentic AI with LangChain and LangGraph (Coursera)
- ongoing MLOps Bootcamp: End-to-End ML Project Development (Udemy)
- 2025 Developing Machine Learning Solutions – AWS (Coursera)
- 2025 Generative AI with Large Language Models (Coursera)
Selected Projects

Explaining how black-box models make decisions is crucial for building trustworthy AI systems, especially in high-stakes domains like healthcare. Traditional attribution methods highlight where a model attends but not what it recognizes. Concept-based methods address this by linking predictions to human-interpretable concepts. This work introduces an ante-hoc explainability framework that combines non-negative matrix factorization (NMF) for unsupervised concept discovery with Graph Attention Networks (GATs) to model relationships between concepts, with a focus on medical imaging (e.g., skin cancer diagnosis).
Read more
While concept bottleneck models (CBMs) offer promise, they suffer from key limitations: the difficulty of defining clinically meaningful concepts, the high cost of annotations, reliance on heatmaps for localization, and potentially spurious alignment between visual features and textual labels. To avoid these pitfalls and leverage the strengths of vision models, we focus exclusively on visually grounded concepts. However, prior visually grounded methods often produce only global, class-specific explanations, neglect concept interactions, and provide unstable explanations due to their post-hoc nature.
Our framework addresses these issues by (a) discovering visual concepts with NMF, and (b) constructing concept graphs that capture interactions through a shallow GAT, balancing expressiveness and interpretability. Although our models do not outperform heavily optimized task-specific CBMs, they demonstrate consistent generalization across medical and standard datasets and, in some cases, surpass baseline CBMs, showcasing a more faithful, visually grounded alternative for explainability.
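The NMF-based concept-discovery step can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: the feature maps here are random stand-ins for the (non-negative) ReLU activations a pretrained vision backbone would produce, and the shapes and component count are hypothetical.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical stand-in for ReLU feature maps from a pretrained CNN:
# 4 images, each a 7x7 spatial grid with 64 channels, flattened so
# every spatial position is one row. NMF requires non-negative input.
rng = np.random.default_rng(0)
activations = rng.random((4 * 7 * 7, 64))

# Factorize A ≈ W @ H: rows of H are "concept" directions in channel
# space; W gives per-position concept presence, which reshapes into
# spatial concept heatmaps for each image.
nmf = NMF(n_components=8, init="nndsvda", random_state=0, max_iter=500)
W = nmf.fit_transform(activations)   # (positions, concepts)
H = nmf.components_                  # (concepts, channels)

concept_maps = W.reshape(4, 7, 7, 8)  # per-image spatial concept maps
print(concept_maps.shape)
```

In the full framework, these discovered concepts become nodes of a graph whose edges are learned by a shallow GAT, so concept interactions (not just concept presence) inform the prediction.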

Real-time anomaly detection is critical in industrial settings to maintain quality and prevent costly failures. In this work, we study multivariate time series data from glass production and compare unsupervised detection and localization methods. Our two-level approach detects anomalies, categorizes their types, and localizes faulty sensors with the help of explainable AI techniques.
Read more
Experiments showed that combining statistical pattern recognition with multivariate anomaly detection pipelines significantly improves both accuracy and interpretability. By categorizing anomalies into distinct classes and highlighting faulty sensors, our method provides actionable insights for engineers monitoring production. This pipeline achieved promising results and demonstrates the value of integrating explainability into anomaly detection systems.
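The detect-then-localize idea can be illustrated with a toy two-level pipeline. This is a simplified sketch, assuming synthetic sensor data with an injected fault; the detector (Isolation Forest on sliding windows) and the z-score localization are illustrative stand-ins for the production methods and attribution techniques used in the project.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical multivariate sensor stream: 1000 timesteps, 5 sensors,
# with an injected fault on sensor 3 between t=600 and t=650.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(1000, 5))
X[600:650, 3] += 6.0

# Level 1 - detect: fit an unsupervised detector on flattened
# sliding-window features of the full multivariate signal.
win = 20
windows = np.stack([X[i:i + win].reshape(-1) for i in range(0, len(X) - win, win)])
det = IsolationForest(contamination=0.05, random_state=0).fit(windows)
flags = det.predict(windows)  # -1 marks anomalous windows

# Level 2 - localize: within flagged windows, rank sensors by their
# deviation from global statistics (a simple stand-in for
# attribution-based localization).
mu, sd = X.mean(axis=0), X.std(axis=0)
for w_idx in np.where(flags == -1)[0]:
    seg = X[w_idx * win:(w_idx + 1) * win]
    scores = np.abs((seg - mu) / sd).mean(axis=0)
    print(f"window {w_idx}: suspect sensor {scores.argmax()}")
```

Separating detection from localization is what makes the output actionable: the first stage says when something is wrong, the second says which sensor to inspect.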

This project reframes topology optimization as a multimodal-to-image translation task. We apply generative AI models, including GANs and diffusion models, to translate structured inputs into optimized 2D and 3D designs. The models consistently generate valid, high-quality structures at reduced computational cost.
Read more
Traditional topology optimization relies on iterative solvers and finite element methods, which are computationally expensive and time-consuming. By leveraging modern generative approaches, we accelerate the design process while preserving structural accuracy. Notably, performance improved at higher iteration levels, underscoring the potential of generative AI to reshape how engineers approach design optimization across multiple modalities.
We proposed a machine learning–based method to predict the ball location in football when it is occluded, using only players’ spatial information from optical tracking data. Trained on 300 matches of the Turkish Super League, our neural network models achieved strong predictive accuracy (R² ≈ 0.79 for the x-axis and 0.92 for the y-axis). This approach can complement vision-based tracking systems and enable more reliable ball tracking in football.
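The regression setup can be sketched as follows. This is a toy illustration, not the trained model from the project: the tracking frames are synthetic, the target is deliberately made learnable (ball near the player centroid plus noise), and the network size is hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical optical-tracking frames: 22 players x (x, y) per frame.
rng = np.random.default_rng(2)
n_frames = 2000
players = rng.uniform(0, 100, size=(n_frames, 22, 2))
# Toy target so the regression is learnable: ball near the player
# centroid plus noise (real data uses annotated ball positions).
ball = players.mean(axis=1) + rng.normal(0, 1, size=(n_frames, 2))

X = players.reshape(n_frames, -1)  # 44 spatial features per frame
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
)
model.fit(X[:1600], ball[:1600])
r2 = model.score(X[1600:], ball[1600:])  # held-out R² over both axes
print(round(r2, 2))
```

Because the input is only player positions, the model can fill in the ball's location exactly when the camera or vision tracker loses it.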
Get in touch
I’m currently open to internships and applied research roles in AI, particularly in vision-language models, explainability, healthcare, and risk management. If you’d like to learn more about what I’m working on, the best way is to get in touch. I’d be happy to hear from you!