Publications

TACL 2025 (under review) Investigating Language-Specific Calibration for Pruning Multilingual Large Language Models.

Simon Kurz, Jian-Jia Chen, Lucie Flek, Zhixue Zhao.

Preprint Tracing and Reversing Rank-One Model Edits.

Paul Youssef, Zhixue Zhao, Christin Seifert, Jörg Schlötterer.

Preprint Implicit Priors Editing in Stable Diffusion via Targeted Token Adjustment.

Feng He, Chao Zhang, Zhixue Zhao.

ICML 2025 Rulebreakers Challenge: Revealing a Blind Spot in Large Language Models’ Reasoning with Formal Logic.

Jason Chan, Robert Gaizauskas, Zhixue Zhao.

ICML 2025 Position: Editing Large Language Models Poses Serious Safety Risks.

Paul Youssef, Zhixue Zhao, Daniel Braun, Jörg Schlötterer, Christin Seifert.

ICLR 2025 ScImage: How good are multimodal large language models at scientific text-to-image generation?

Leixin Zhang, Yinjie Cheng, Weihe Zhai, Steffen Eger, Jonas Belouadi, Fahimeh Moafian, Zhixue Zhao.

ACL 2025 Main (preprint coming soon) Knowledge Image Matters: Improving Knowledge-Based Visual Reasoning with Multi-Image Large Language Models.

Guanghui Ye, Huan Zhao, Zhixue Zhao, Xupeng Zha, Yang Liu, Zhihua Jiang.

ACL 2025 Findings (preprint coming soon) Explainable Hallucination through Natural Language Inference Mapping.

Wei-Fan Chen, Zhixue Zhao, Akbar Karimi, Lucie Flek.

NAACL 2025 Main Can We Reverse In-Context Knowledge Edits?

Paul Youssef, Zhixue Zhao, Jörg Schlötterer, Christin Seifert.

NAACL 2025 Main (oral presentation) Has this Fact been Edited? Detecting Knowledge Edits in Language Models.

Paul Youssef, Zhixue Zhao, Jörg Schlötterer, Christin Seifert.

ECIR 2025 Do LLMs Provide Consistent Answers to Health-Related Questions Across Languages?

Ipek Baris Schlicht, Zhixue Zhao, Burcu Sayin, Lucie Flek, Paolo Rosso.

TACL 2024, Vol. 12 Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization.

George Chrysostomou, Zhixue Zhao, Miles Williams, and Nikolaos Aletras.

NAACL 2024 Main (oral presentation) Comparing Explanation Faithfulness between Multilingual and Monolingual Fine-tuned Language Models.

Zhixue Zhao and Nikolaos Aletras.

ACL 2023 Main (oral presentation; ours was the first talk) Incorporating Attribution Importance for Improving Faithfulness Metrics.

Zhixue Zhao and Nikolaos Aletras.

EMNLP 2022 Findings On the Impact of Temporal Concept Drift on Model Explanations.

Zhixue Zhao, George Chrysostomou, Kalina Bontcheva, and Nikolaos Aletras.

JAMIA, Vol. 7, Issue 4 (2024) The FAIR database: facilitating access to public health research literature.

Zhixue Zhao, James Thomas, Gregory Kell, Claire Stansfield, Mark Clowes, Sergio Graziosi, Jeff Brunton, Iain James Marshall, Mark Stevenson.

ReLM@AAAI 2024 ReAGent: A Model-agnostic Feature Attribution Method for Generative Language Models. (Use ReAGent via Inseq.)

Zhixue Zhao and Boxuan Shan.

Online Social Networks and Media, Vol. 29, 100205 (2022) Utilizing Subjectivity Level to Mitigate Identity Term Bias in Toxic Comments Classification.

Zhixue Zhao, Ziqi Zhang, and Frank Hopfgartner.

WWW 2021 Companion A Comparative Study of Using Pre-trained Language Models for Toxic Comment Classification.

Zhixue Zhao, Ziqi Zhang, and Frank Hopfgartner.

CICLing 2019 Detecting Toxic Content Online and the Effect of Training Data on Classification Performance.

Zhixue Zhao, Ziqi Zhang, and Frank Hopfgartner.

