Anar Amirli — Portfolio

Portrait of Anar Amirli

Hi! I’m Anar Amirli, an applied AI scientist with experience in building interpretable and scalable machine learning systems. I recently completed my Master’s in Computer Science at Saarland University, advised by Prof. Antonio Krüger and Prof. Daniel Sonntag, where I developed a concept-based explainability framework for vision models.

Most recently, I worked as a Research Assistant at the German Research Center for AI (DFKI), contributing to projects ranging from AI models for medical imaging to real-time risk/anomaly detection systems for manufacturing and cloud-deployed ML workflows. I focus on developing AI systems that are practical and trustworthy in the real world.

Short CV

Industrial Training & Certifications

Selected Projects

2025
Concept-graph explainability thesis thumbnail
Beyond Heatmaps: A Visual Concept-Based Explainable Model via Graph Attention Networks
Anar Amirli

Explaining how black-box models make decisions is crucial for building trustworthy AI systems, especially in high-stakes domains like healthcare. Traditional attribution methods highlight where a model attends but not what it recognizes. Concept-based methods address this by linking predictions to human-interpretable concepts. This work introduces an ante-hoc explainability framework that combines non-negative matrix factorization (NMF) for unsupervised concept discovery with Graph Attention Networks (GATs) to model relationships between concepts, with a focus on medical imaging, specifically skin cancer diagnosis.


While concept bottleneck models (CBMs) offer promise, they suffer from key limitations: the difficulty of defining clinically meaningful concepts, the high cost of annotations, reliance on heatmaps for localization, and potentially spurious alignment between visual features and textual labels. To avoid these pitfalls and leverage the strengths of vision models, we focus exclusively on visually grounded concepts. However, prior visually grounded methods often produce only global, class-specific explanations, neglect concept interactions, and provide unstable interpretability due to their post-hoc nature.

Our framework addresses these issues by (a) discovering visual concepts with NMF, and (b) constructing concept graphs that capture interactions through a shallow GAT, balancing expressiveness and interpretability. Although our models do not outperform heavily optimized task-specific CBMs, they demonstrate consistent generalization across medical and standard datasets and, in some cases, surpass baseline CBMs, showcasing a more faithful, visually grounded alternative for explainability.
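For illustration, here is a minimal sketch of this two-stage idea, assuming scikit-learn's NMF for the factorization step and PyTorch Geometric's GATConv for the shallow attention layer; the layer sizes, graph construction, and pooling are hypothetical stand-ins, not the exact thesis implementation:

```python
# Minimal sketch: (a) NMF concept discovery, (b) shallow GAT over a concept graph.
# Shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.decomposition import NMF
from torch_geometric.nn import GATConv

def discover_concepts(feats, n_concepts=8):
    """feats: (n_locations, n_channels) non-negative (ReLU) CNN feature vectors
    pooled over a dataset. Returns per-location concept presences and the
    concept directions in feature space."""
    nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500)
    presences = nmf.fit_transform(feats)   # how strongly each location expresses a concept
    basis = nmf.components_                # concept prototypes in feature space
    return presences, basis

class ConceptGATClassifier(nn.Module):
    """Nodes are per-image concept embeddings; edges link co-occurring concepts.
    A single GAT layer keeps the model shallow, and its attention weights expose
    concept interactions as the explanation."""
    def __init__(self, concept_dim, hidden_dim, n_classes):
        super().__init__()
        self.gat = GATConv(concept_dim, hidden_dim, heads=2, concat=False)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x, edge_index):
        # x: (n_concepts, concept_dim) node features for one image
        h, (edge_idx, att) = self.gat(x, edge_index, return_attention_weights=True)
        h = torch.relu(h)
        logits = self.head(h.mean(dim=0))  # pool concept nodes into class scores
        return logits, att                 # attention doubles as the concept-interaction explanation
```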

2022
Glass production anomaly detection: sensors and alerts thumbnail
Real-Time Multivariate Anomaly Detection and Fault Localization for Manufacturing Systems
Mina Ameli, Anar Amirli, Philipp Aaron Becker, Holger Bähring, Wolfgang Maaß

Real-time anomaly detection is critical in industrial settings to maintain quality and prevent costly failures. In this work, we study multivariate time series data from glass production and compare unsupervised detection and localization methods. Our two-level approach detects anomalies, categorizes their types, and localizes faulty sensors with the help of explainable AI techniques.


Experiments showed that combining statistical pattern recognition with multivariate anomaly detection pipelines significantly improves both accuracy and interpretability. By categorizing anomalies into distinct classes and highlighting faulty sensors, our method provides actionable insights for engineers monitoring production. This pipeline achieved promising results and demonstrates the value of integrating explainability into anomaly detection systems.
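As a rough illustration of the two-level idea, the sketch below uses an Isolation Forest as a stand-in multivariate detector and ranks sensors by their deviation from the training distribution as a simple, explainable localization step; the detector choice, window features, and thresholds are assumptions, not the exact pipeline from the paper:

```python
# Sketch of a two-level pipeline: (1) detect anomalous windows, (2) localize sensors.
import numpy as np
from sklearn.ensemble import IsolationForest

def detect_and_localize(train_windows, test_windows, sensor_names, top_k=3):
    """train_windows / test_windows: (n_windows, n_sensors) aggregated
    sliding-window features from the multivariate time series."""
    # Level 1: multivariate anomaly detection on window-level features.
    detector = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
    detector.fit(train_windows)
    is_anomaly = detector.predict(test_windows) == -1

    # Level 2: fault localization via per-sensor deviation from the training
    # distribution (a simple, transparent stand-in for the attribution step).
    mu = train_windows.mean(axis=0)
    sigma = train_windows.std(axis=0) + 1e-8
    z = np.abs((test_windows - mu) / sigma)

    findings = []
    for i, flagged in enumerate(is_anomaly):
        if flagged:
            ranked = np.argsort(z[i])[::-1][:top_k]
            findings.append((i, [sensor_names[j] for j in ranked]))
    return findings  # list of (window index, most suspicious sensors)
```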

2022
Topology optimization generative AI designs thumbnail
From Multimodal Inputs to Optimized Designs: Generative AI for Topology Optimization in 2D and 3D
Yelaman Maksum, Anar Amirli, Yulong Ding, Alessandro Romagnoli, Samir Rustamov, Bakytzhan Akhmetov

This project reframes topology optimization as a multimodal-to-image translation task. We apply generative AI models, including GANs and diffusion models, to translate structured inputs into optimized 2D and 3D designs. The models consistently generate valid, high-quality structures at reduced computational cost.


Traditional topology optimization relies on iterative solvers and finite element methods, which are computationally expensive and time-consuming. By leveraging modern generative approaches, we accelerate the design process while preserving structural accuracy. Notably, performance improved at higher iteration levels, underscoring the potential of generative AI to reshape how engineers approach design optimization across multiple modalities.
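To make the "conditions in, design out" framing concrete, here is a toy conditional encoder-decoder that maps condition channels (e.g., load and boundary masks) to a 2D material-density field; the architecture and channel layout are illustrative assumptions rather than the GAN or diffusion models used in the project:

```python
# Toy illustration of topology optimization as image translation.
import torch
import torch.nn as nn

class DesignGenerator(nn.Module):
    def __init__(self, cond_channels=3, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(cond_channels, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, conditions):
        # conditions: (B, cond_channels, H, W) encoded loads/boundary conditions
        # returns: (B, 1, H, W) predicted material density in [0, 1]
        return self.decoder(self.encoder(conditions))

# One forward pass on a 64x64 design domain with random condition channels.
model = DesignGenerator()
density = model(torch.rand(1, 3, 64, 64))
```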

2018
Prediction of Ball Location in Football Using Optical Tracking Data
Anar Amirli, Hande Alemdar

We proposed a machine learning–based method to predict the ball location in football when it is occluded, using only players’ spatial information from optical tracking data. Trained on 300 matches of the Turkish Super League, our neural network models achieved strong predictive accuracy (R² ≈ 79% for the x-axis and 92% for the y-axis). This approach can complement vision-based tracking systems and enable more reliable ball tracking in football.
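A minimal sketch of the regression setup, assuming flattened player coordinates as features and scikit-learn's MLPRegressor as a stand-in for the tuned networks; the feature layout and random data below are purely illustrative:

```python
# Sketch: predict (x, y) ball position from per-frame player coordinates.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# X: (n_frames, 2 * n_players) flattened player positions; y: (n_frames, 2) ball positions.
# Random data stands in for real tracking frames (22 players -> 44 features).
rng = np.random.default_rng(0)
X, y = rng.random((1000, 44)), rng.random((1000, 2))

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
model.fit(X[:800], y[:800])
pred = model.predict(X[800:])
print(r2_score(y[800:], pred, multioutput="raw_values"))  # per-axis R² scores
```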

Get in touch

I’m currently open to internships and applied research roles in AI, particularly in vision-language models, explainability, healthcare, and risk management. If you’d like to learn more about what I’m working on, the best way is to get in touch. I’d be happy to hear from you!