About me
I am a PhD student at the University of Florida, working under the guidance of Dr. Bonnie Dorr. My research is primarily centered on Natural Language Inference (NLI), a field that explores the semantic relationships between a given premise and hypothesis. Specifically, I focus on identifying ambiguous premise-hypothesis pairs that allow for multiple valid interpretations and integrating these complexities into general NLI systems. Additionally, my work involves recognizing instances that require additional commonsense knowledge for precise inference and developing methods to generate and incorporate this knowledge, thereby enhancing the performance of NLI pipelines.

Interests: Natural Language Inference, Ambiguity in NLI, Commonsense-Augmented Inference, Abstract Meaning Representations
Recent Publications
January 05, 2026
Preprint
This study examines whether Large Language Models (LLMs) can generate useful commonsense axioms for Natural Language Inference and shows that a hybrid approach, which selectively provides highly factual axioms based on judged helpfulness, yields consistent accuracy improvements across all tested configurations, demonstrating the effectiveness of selective knowledge access for NLI.
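The selective-provision step can be pictured with a minimal, hypothetical sketch: generated axioms carry factuality and helpfulness judgments, and only those passing both filters are handed to the NLI model. The field names, threshold, and example axioms below are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of selective axiom provision for NLI.
# Field names ("text", "factuality", "helpful") and the 0.8
# threshold are invented for illustration.

def select_axioms(axioms, factuality_threshold=0.8):
    """Return the text of axioms judged both highly factual and helpful."""
    return [a["text"] for a in axioms
            if a["factuality"] >= factuality_threshold and a["helpful"]]

candidates = [
    {"text": "Bears hibernate in winter.", "factuality": 0.95, "helpful": True},
    {"text": "Bears are reptiles.", "factuality": 0.10, "helpful": True},
    {"text": "Winter is a season.", "factuality": 0.99, "helpful": False},
]
selected = select_axioms(candidates)  # only the first axiom passes both filters
```

Only the surviving axioms would then be appended to the premise-hypothesis pair before inference, which is what makes the access "selective."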
Nov 08, 2025
Accepted to NLPerspectives EMNLP 2025
This position paper argues that annotation disagreement in Natural Language Inference (NLI) is not mere noise but often reflects meaningful interpretive variation, especially when triggered by ambiguity in the premise or hypothesis. While underspecified guidelines and annotator behavior can contribute to variation, content-based ambiguity offers a process-independent signal of divergent human perspectives. We call for a shift toward ambiguity-aware NLI by systematically identifying ambiguous input pairs and classifying ambiguity types.
November 02, 2024
Accepted to FEVER EMNLP 2024
We implement AMREx, an Abstract Meaning Representation (AMR)-based veracity prediction and explanation system for fact verification. AMREx combines Smatch, an AMR evaluation metric used to measure meaning containment, with textual similarity, and surpasses the AVeriTeC baseline accuracy, demonstrating the effectiveness of our approach for real-world claim verification.
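The meaning-containment idea can be illustrated with a toy sketch. Real Smatch searches over variable alignments between two AMR graphs; assuming the variables are already aligned, the score reduces to overlap of relation triples, and the recall direction captures how much of a claim's meaning the evidence covers. The triples below are invented examples, not from the paper.

```python
# Toy illustration of Smatch-style scoring over AMR triples.
# Real Smatch searches over variable mappings; here variables are
# assumed pre-aligned, so the score reduces to set overlap.
# All triples are invented for illustration.

def triple_scores(evidence_triples, claim_triples):
    """Precision/recall/F1 of matched triples, plus a containment
    ratio: the fraction of claim triples covered by the evidence."""
    matched = len(set(evidence_triples) & set(claim_triples))
    precision = matched / len(evidence_triples) if evidence_triples else 0.0
    recall = matched / len(claim_triples) if claim_triples else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1, recall  # containment == recall here

evidence = [("b", "instance", "bear"), ("s", "instance", "sleep"),
            ("s", "ARG0", "b"), ("w", "instance", "winter"), ("s", "time", "w")]
claim = [("b", "instance", "bear"), ("s", "instance", "sleep"), ("s", "ARG0", "b")]

p, r, f1, containment = triple_scores(evidence, claim)
```

Here every claim triple appears in the evidence, so containment is 1.0 even though the evidence states more than the claim, which is the asymmetry that makes containment useful for veracity prediction.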
November 02, 2024
Accepted to SciCon EMNLP 2024
We perform a comparative analysis of rule-based and DNN models for political bias detection, contrasting the opaque architecture of a deep learning model with the transparency of a linguistically informed rule-based model. We show that the rule-based model performs consistently across different data conditions and offers greater transparency, whereas the deep learning model depends heavily on its training set and struggles with unseen data.
July 12, 2024
Accepted to NLDB 2024
We implement Divergence-Aware Hallucination-Remediated SRL projection (DAHRS), which applies linguistically informed alignment remediation followed by greedy First-Come First-Assign (FCFA) SRL projection. DAHRS improves the accuracy of SRL projection without additional transformer-based machinery, beats XSRL in both human and automatic comparisons, and moves beyond headwords to accommodate phrase-level SRL projection (e.g., EN-FR, EN-ES).
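The greedy FCFA idea can be sketched in a few lines: source-side semantic roles are projected to target tokens through word alignments, and each target token keeps the first role it receives. This is a simplified, invented illustration of the first-come first-assign principle, not the paper's implementation.

```python
# Toy sketch of greedy First-Come First-Assign (FCFA) role projection.
# source_roles maps source token indices to SRL labels;
# alignments is an ordered list of (source_index, target_index) links.
# Each target token keeps the FIRST role projected onto it.
# Simplified illustration only, not the paper's implementation.

def fcfa_project(source_roles, alignments):
    target_roles = {}
    for src, tgt in alignments:
        role = source_roles.get(src)
        if role is not None and tgt not in target_roles:
            target_roles[tgt] = role  # first assignment wins
    return target_roles

projected = fcfa_project({0: "ARG0", 2: "ARG1"}, [(0, 1), (2, 0), (0, 0)])
```

Because assignment order decides conflicts, noisy later alignments (such as the duplicate link to target token 0 above) cannot overwrite an earlier projection, which is what makes the greedy pass robust to alignment hallucinations.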