I am Cass Zhixue Zhao, a Lecturer in Natural Language Processing in the Department of Computer Science at the University of Sheffield. My long-term research goal is to enable trustworthy, responsible, and efficient NLP models. These days, I am interested in anything related to interpretability and large language models (LLMs). My recent research projects focus on model compression, model editing, and text-to-image models.
Previously, I worked as a postdoctoral researcher on explainable AI and responsible AI. The overarching aim was to demystify predictions made by black-box LLMs, making them easier to understand and more trustworthy. This work also addressed model hallucination to improve the reliability of LLMs, alongside model compression techniques that reduce computational demands and thus foster inclusivity within NLP research. Back in 2020, I was a research assistant in the same department, contributing to NIHR-funded NLP projects for systematic reviews of public health research. My PhD research, funded by the University of Sheffield, looked at transfer learning and mitigating model bias for hate speech detection.
I am looking for highly motivated PhD students. A fully funded PhD studentship (UKRI rate, 3.5 years) is available. Contact me if you are interested.
Trustworthy AI, Model explanations, Generative models, Hallucination, Faithfulness, Text generation
Zhixue Zhao, George Chrysostomou, Miles Williams, and Nikolaos Aletras. 2024. Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization. Transactions of the Association for Computational Linguistics. [TACL Vol. 12 (2024)]
Zhixue Zhao and Nikolaos Aletras. 2024. Comparing Explanation Faithfulness between Multilingual and Monolingual Fine-tuned Language Models. 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics. [NAACL 2024 Main] (oral presentation)
Zhixue Zhao and Nikolaos Aletras. 2023. Incorporating Attribution Importance for Improving Faithfulness Metrics. The 61st Annual Meeting of the Association for Computational Linguistics. [ACL 2023 Main] (oral presentation; first talk of the session)
Zhixue Zhao, George Chrysostomou, Kalina Bontcheva, and Nikolaos Aletras. 2022. On the Impact of Temporal Concept Drift on Model Explanations. In Findings of the Association for Computational Linguistics: EMNLP 2022. [EMNLP 2022 Findings]
Zhixue Zhao, Ziqi Zhang, and Frank Hopfgartner. 2022. Utilizing Subjectivity Level to Mitigate Identity Term Bias in Toxic Comments Classification. Online Social Networks and Media, 29, 100205. [Journal]
Zhixue Zhao, Ziqi Zhang, and Frank Hopfgartner. 2021. A Comparative Study of Using Pre-trained Language Models for Toxic Comment Classification. In Companion Proceedings of the Web Conference 2021 (pp. 500–507). [WWW 2021]
(See the Publications page for more papers.)