Rukmangadh Sai Myana

PhD Candidate

Florida International University

Professional Summary

I’m Rukmangadh Sai Myana, a Ph.D. researcher at Florida International University specializing in Explainable Artificial Intelligence (XAI). My work centers on designing transparent, interpretable, and causally grounded AI systems that people can understand and trust. With a strong foundation in mathematics and a commitment to democratizing technology, I aim to bridge the gap between advanced AI models and human intuition, ensuring these tools remain accessible, responsible, and beneficial to society. I prioritize reproducibility and open science, and I welcome collaborations that translate theory into trustworthy, practical tools. 😃

Education

PhD Computer Science

Florida International University

BTech Electrical Engineering

Indian Institute of Technology Bombay

Interests

Explainable AI
Reinforcement Learning
Machine Learning
📚 Research

My research develops principled, actionable, and broadly applicable AI explanations grounded in clear mathematics. I investigate how explanations should behave across model families, from protein-folding systems such as AlphaFold2 to vision transformers and multimodal models, and design tools for AI explainability and fairness.

This research combines rigorous theory (game-theoretic reasoning, sensitivity analysis) with empirical evaluation on realistic tasks and datasets. Prior work includes two publications on South Florida flood mitigation that offer actionable guidance for planners. Recent efforts include a paper on the limits of XAI for AlphaFold2 and related architectures, and a preprint proposing a mathematically grounded framework for Vision Transformers that yields stable, faithful attributions. Throughout, the emphasis remains on theoretical soundness, real-world evaluation, open-source releases, and reproducibility.

The goal is to create impact along three dimensions: clarifying what explanation methods can guarantee; delivering practical tools and benchmarks for trustworthy deployment; and bringing these methods into real-world use in scientific and safety-critical domains. Future directions include extending methods to text and multimodal models, building standardized XAI evaluation suites, and releasing toolkits for robust explanations.

Publications
(2025). Explaining Protein Folding Networks Using Integrated Gradients and Attention Mechanisms. ICCABS.
DOI
(2025). Deep Learning Models for Water Stage Predictions in South Florida. Journal of Water Resources Planning and Management, 151(10).
DOI
(2023). Explainable Parallel RCNN with Novel Feature Representation for Time Series Forecasting. ECML-PKDD AALTD Workshop.
DOI