Research
My research lies in Natural Language Processing (NLP), with a focus on neuro-symbolic approaches, improving interpretability and alignment in large language models (LLMs), and meta-reasoning in LLMs.
Current Projects
NeuSyM-Meta: Neuro-Symbolic Meta-Reasoning for Self-Aware Logical Evaluation in Large Language Models
Exploring neuro-symbolic methods for meta-reasoning that enable large language models (LLMs) to evaluate and improve their own reasoning processes; an illustrative sketch of the symbolic step-checking idea appears after the current projects.
AI Misalignment and Scheming Evaluation Framework
Studying AI misalignment with a focus on AI scheming, and developing benchmarks to evaluate and detect covert deceptive behaviors in advanced AI systems.
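To make the self-evaluation idea behind NeuSyM-Meta slightly more concrete, the sketch below shows the kind of symbolic step-checker a meta-reasoning loop could call on a model's chain of thought. The plain-text step format, the function name, and the use of sympy to verify arithmetic claims are illustrative assumptions for this sketch, not the project's actual design.

```python
# Minimal sketch: symbolically verify "expression = value" claims in a reasoning chain.
# The step format and function name are assumptions made for illustration only.
import re
import sympy

def check_arithmetic_steps(chain_of_thought: str) -> list[tuple[str, bool]]:
    """Extract 'expression = value' claims from a reasoning trace and verify
    each one symbolically, so unverified steps can be flagged for revision."""
    results = []
    for line in chain_of_thought.splitlines():
        match = re.search(r"([0-9.\s+\-*/()]+)=([0-9.\s+\-*/()]+)", line)
        if not match:
            continue  # no checkable equation on this line
        lhs, rhs = match.group(1), match.group(2)
        try:
            ok = sympy.simplify(sympy.sympify(lhs) - sympy.sympify(rhs)) == 0
        except (sympy.SympifyError, SyntaxError):
            ok = False  # treat unparseable claims as unverified
        results.append((line.strip(), ok))
    return results

if __name__ == "__main__":
    trace = "Step 1: 12 * 7 = 84\nStep 2: 84 + 19 = 104"  # second step is wrong
    for step, ok in check_arithmetic_steps(trace):
        print(("OK  " if ok else "FAIL"), step)
```

In a full meta-reasoning loop, the flagged steps would be fed back to the LLM as a revision signal rather than printed, which is where the self-aware evaluation comes in.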
Recent Projects (2024-2025)
Assessing Algorithmic Bias in Language-Based Depression Detection - Spring 2025 (paper accepted at BHI 2025)
Investigating gender and racial disparities in DNN vs. LLM approaches to depression detection, with fairness-aware mitigation strategies
Structured Reasoning with LLMs for Question Answering over Tabular Data - Spring 2025
SemEval-2025 Task 8 system using hybrid LLM strategies, including retrieval-augmented generation and column-aware filtering; a toy sketch of the column-filtering step appears after this project list
Neuro-Symbolic Approach to Depression Detection - Fall 2024
Integrating rule-based systems with neural models such as Mental-RoBERTa for language-based depression classification
Detection of Insomnia from Clinical Notes - Spring 2025
SMM4H-HeaRD 2025 shared task system using transformer models and structured metadata for automated insomnia detection from MIMIC-III clinical notes
SmartMeet – AI Meeting Summarizer - Fall 2024
Automated meeting summarization tool with key discussion point extraction, personalized to-do lists, and JIRA ticket integration
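As a toy illustration of the column-aware filtering used in the tabular question-answering project above, the sketch below keeps only the table columns whose names share a token with the question before the table would be serialized into an LLM prompt. The example table, the question, and the keyword-overlap heuristic are assumptions made for the illustration; the actual SemEval-2025 Task 8 system combines this kind of filtering with retrieval-augmented generation and other strategies.

```python
# Toy column-aware filtering: drop columns that share no token with the question.
# Table contents, question, and the overlap heuristic are illustrative assumptions.
import pandas as pd

def filter_columns(table: pd.DataFrame, question: str) -> pd.DataFrame:
    """Keep only columns whose names overlap with the question's tokens, so the
    table serialized into the LLM prompt stays small and on-topic."""
    q_tokens = {tok.strip("?,.").lower() for tok in question.split()}
    keep = [col for col in table.columns
            if set(col.lower().replace("_", " ").split()) & q_tokens]
    # Fall back to the full table if the heuristic filters everything out.
    return table[keep] if keep else table

if __name__ == "__main__":
    table = pd.DataFrame({
        "country": ["France", "Japan"],
        "population": [68_000_000, 125_000_000],
        "gdp_per_capita": [44_000, 34_000],
    })
    question = "Which country has the larger population?"
    print(filter_columns(table, question))  # keeps 'country' and 'population'
```

The design intuition is simply that a smaller, question-relevant table slice is cheaper to pass to an LLM and less likely to distract it than the full table.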