About Me

I am Cass Zhixue Zhao, a Lecturer in Natural Language Processing in the Department of Computer Science at the University of Sheffield. My long-term research goal is to make NLP models trustworthy, responsible, and efficient. These days, I am interested in anything related to interpretability and large language models (LLMs). My recent research projects focus on model compression, model editing, and text-to-image models.

Previously, I worked as a postdoctoral researcher on explainable AI and responsible AI. The overarching aim was to demystify predictions made by black-box LLMs, making them easier to understand and more trustworthy. This work also addressed model hallucination to improve the reliability of LLMs, and explored model compression techniques that reduce compute demands and thus foster inclusivity within NLP research. Back in 2020, I was a research assistant in the same department, working on NIHR-funded NLP projects for systematic reviews of public health research. My Ph.D. research, funded by the University of Sheffield, focused on transfer learning and mitigating model bias in hate speech detection.


I am looking for highly motivated PhD students. One funded PhD position (3.5 years, UKRI rate) is available. You are welcome to contact me with your CV. CSC-funded or self-funded applicants proposing their own topics are also welcome.


Selected Publications

Leixin Zhang, Yinjie Cheng, Weihe Zhai, Steffen Eger, Jonas Belouadi, Fahimeh Moafian, Zhixue Zhao. 2025. ScImage: How good are multimodal large language models at scientific text-to-image generation? The Thirteenth International Conference on Learning Representations. ICLR 2025

Paul Youssef, Zhixue Zhao, Jörg Schlötterer, Christin Seifert. 2025. Can We Reverse In-Context Knowledge Edits? The 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics. NAACL 2025

Paul Youssef, Zhixue Zhao, Jörg Schlötterer, Christin Seifert. 2025. Has this Fact been Edited? Detecting Knowledge Edits in Language Models. The 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics. NAACL 2025

George Chrysostomou, Zhixue Zhao, Miles Williams, and Nikolaos Aletras. 2024. Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization. Transactions of the Association for Computational Linguistics (TACL), Vol. 12 (2024)

Zhixue Zhao, James Thomas, Gregory Kell, Claire Stansfield, Mark Clowes, Sergio Graziosi, Jeff Brunton, Iain James Marshall, Mark Stevenson. 2024. The FAIR database: facilitating access to public health research literature. JAMIA Open, Vol. 7, Issue 4 (2024)

Zhixue Zhao and Nikolaos Aletras. 2024. Comparing Explanation Faithfulness between Multilingual and Monolingual Fine-tuned Language Models. The 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics. NAACL 2024 Main (oral presentation)

Zhixue Zhao and Nikolaos Aletras. 2023. Incorporating Attribution Importance for Improving Faithfulness Metrics. The 61st Annual Meeting of the Association for Computational Linguistics. ACL 2023 Main (oral presentation; the first talk of the session was ours)

Zhixue Zhao, George Chrysostomou, Kalina Bontcheva, and Nikolaos Aletras. 2022. On the Impact of Temporal Concept Drift on Model Explanations. Findings of the Association for Computational Linguistics: EMNLP 2022. EMNLP 2022 Findings

Zhixue Zhao, Ziqi Zhang, and Frank Hopfgartner. 2022. Utilizing Subjectivity Level to Mitigate Identity Term Bias in Toxic Comments Classification. Online Social Networks and Media, Vol. 29, 100205 (2022)

Zhixue Zhao, Ziqi Zhang, and Frank Hopfgartner. 2021. A Comparative Study of Using Pre-trained Language Models for Toxic Comment Classification. In Companion Proceedings of the Web Conference 2021, pp. 500–507. WWW 2021

(More papers on the Publications page.)
