Michael Sullivan

Postdoctoral Researcher (Saarland University)

msullivan at lst dot uni-saarland dot de

CV

GitHub

LinkedIn

Google Scholar


About me

I am a postdoc in the Computational Linguistics Group at UdS. My research interests currently lie in logical reasoning and tool use with LLMs. Past projects include work on the shallow heuristics that NLI models exploit in place of genuine reasoning; language modeling over logical-form representations rather than text; and the automatic generation of tool-use environments for training LLM agents with reinforcement learning.

My PhD is in Linguistics (under JP Koenig) and my MS is in Computer Science and Engineering (under Rohini K. Srihari), both from the University at Buffalo. My Erdős number is four.

Education

PhD in Linguistics

Semantics/Pragmatics Track

Dissertation: Language Modeling over Logical Forms

University at Buffalo (2020-2025)


MS in Computer Science and Engineering

Research/Honors Track

MS Project: Probing NLI Models with External Negation

University at Buffalo (2023-2024)


BA in Linguistics

With Research Distinction

Minors: Spanish, German

The Ohio State University (2016-2019)


Publications

Michael Sullivan, Mareike Hartmann, and Alexander Koller (2025). Procedural Environment Generation for Tool-Use Agents. To appear in Proceedings of EMNLP 2025.

Nicola Horst, Davide Mazzaccara, Antonia Schmidt, Michael Sullivan, Filippo Momentè, Luca Franceschetti, Philipp Sadler, Sherzod Hakimov, Alberto Testoni, Raffaella Bernardi, Raquel Fernández, Alexander Koller, Oliver Lemon, David Schlangen, Mario Giulianelli, and Alessandro Suglia (2025). Playpen: An Environment for Exploring Learning Through Conversational Interaction. To appear in Proceedings of EMNLP 2025.

Luisa Geiger, Mareike Hartmann, Michael Sullivan, and Alexander Koller (2025). Evaluating Spatiotemporal Consistency in Automatically Generated Sewing Instructions. To appear in Proceedings of EMNLP 2025.

Michael Sullivan (2025). Exploring Graph Representations of Logical Forms for Language Modeling. In Findings of the Association for Computational Linguistics: ACL 2025.

Michael Sullivan (2025). Language Modeling over Logical Forms (Doctoral dissertation, University at Buffalo).

Michael Sullivan (2024). It is not True that Transformers are Inductive Learners: Probing NLI Models with External Negation. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), 1924–1945.

Michael Sullivan, Navid Madani, Sougata Saha, and Rohini Srihari (2023). Positional Transformers for Claim Span Identification. In Forum for Information Retrieval Evaluation (Working Notes).

Sougata Saha, Michael Sullivan, and Rohini Srihari (2023). Hate Speech Detection in Low Resource Indo-Aryan Languages. In Forum for Information Retrieval Evaluation (Working Notes).

Michael Sullivan, Mohammed N. Yasin, and Cassandra L. Jacobs (2023). University at Buffalo at SemEval-2023 Task 11: MASDA–Modelling Annotator Sensibilities through DisAggregation. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023).

Nominated for Best System Paper Award at SemEval 2023


Michael Sullivan (2023). Formal-Logical Distributional Semantics: Applications to Property Inference. In Workshop on Knowledge Augmented Methods for Natural Language Processing at AAAI 2023.