nikolaosroufas@gmail.com · inf2024146@ionio.gr
Curriculum Vitae
Google Scholar
GitHub
LinkedIn
I’m an undergraduate Computer Science student at the Ionian University, Department of Informatics, ranked among the top 3% of my cohort in AI- and programming-oriented modules. My academic and research interests lie in Neural Networks, Machine Learning, and Explainable AI (XAI), with a particular focus on making neural architectures more transparent, modular, and interpretable.
As part of the Erasmus+ exchange program, I’m currently studying at Sapienza University of Rome, focusing on Artificial Intelligence, Machine Learning, and Data Science. Alongside my coursework, I conduct research on Self-Explainable Neural Networks (SENN), evaluating transparency metrics and feature attribution methods for interpretable large-scale models.
My research is guided by a central question: how can we design neural systems that are both powerful and interpretable? I explore architectures that balance high performance with transparency, bridging deep learning with principles of interpretability, modularity, and human-aligned reasoning.
I work extensively with transformer-based architectures, semantic retrieval pipelines, and explainability methods such as layer-wise information decomposition (LID) and attention visualization. My projects span social discourse analysis, legal AI, and scientific text understanding, emphasizing reproducibility and responsible AI development.
I have co-authored peer-reviewed research published and presented at international venues, with work appearing through Springer, Frontiers in Artificial Intelligence, and IEEE. As one of the youngest Greek researchers to present at AIAI 2025, I continue to develop efficient and explainable neural architectures that contribute to transparent and trustworthy AI. My long-term goal is to pursue a Ph.D. in Artificial Intelligence, potentially at institutions such as ETH Zurich, and to advance the design of interpretable neural systems that bridge theory and application.
My work focuses on the design and analysis of neural architectures for interpretable AI. These directions are reflected in the publications below:
Analyzing Public Discourse and Sentiment in Climate Change Discussions Using Transformer-Based Models
N. Roufas, A. Mohasseb, I. Karamitsos & A. Kanavos*
IFIP AIAI 2025 | paper
LegNER: A Domain-Adapted Transformer for Legal Named Entity Recognition and Text Anonymization
N. Roufas, I. Karamitsos, K. Al-Hussaeni & A. Kanavos*
Frontiers in Artificial Intelligence, 2025 | In press
Efficient Protein Folding with Transformer Models Using the Performer Architecture
N. Roufas, I. Karamitsos, K. Al-Hussaeni, V. C. Gerogiannis & A. Kanavos*
ICTA 2025 | In press
Do Deeper Layers Explain Better? An LID-Based Study of Transformer Explainability
N. Roufas, A. Kanavos, I. Karamitsos, K. Al-Hussaeni & M. Maragoudakis*
IEEE AdHD Big Data Workshop 2025 | In press
Last Update: October 2025