Recent advances in large language models highlight the importance of deepfake text detection to mitigate risks such as fake-news propagation and plagiarism. We build a wild testbed by gathering texts from various human writings and deepfake texts generated by different LLMs.
Explanations generated through single-pass prompting often lack sufficiency and conciseness, so we develop an information bottleneck method that refines them into explanations that are both sufficient and concise. (Findings of ACL 2023)
Cognitive stimulation (CS) helps maintain the cognitive health of older adults with cognitive impairment. We construct a Chinese CS conversation dataset and propose a multi-source knowledge fusion method for CS dialogue. (ACL 2023)
We study a flexible and efficient zero-shot learning method. Given a zero-shot task, we first generate a dataset from scratch using PLMs in an unsupervised manner; we then train a tiny task model under the supervision of the synthesized dataset. (EMNLP 2022)
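The two-stage pipeline above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the template-based `synthesize` function is a hypothetical stand-in for unsupervised PLM generation, and the "tiny task model" is simplified to a bag-of-words perceptron.

```python
import random

random.seed(0)

# Stage 1: generate a labeled dataset from scratch. In the actual method a
# PLM produces the texts; here, hypothetical templates stand in for it.
POS = ["great", "wonderful", "excellent", "lovely"]
NEG = ["terrible", "awful", "boring", "dreadful"]

def synthesize(n):
    data = []
    for _ in range(n):
        label = random.choice([0, 1])
        word = random.choice(POS if label else NEG)
        data.append((f"the movie was {word}", label))
    return data

# Stage 2: train a tiny task model under the supervision of the synthesized
# dataset only -- no human-labeled data is used at any point.
def train(data, epochs=5):
    w = {}
    for _ in range(epochs):
        for text, y in data:
            score = sum(w.get(t, 0.0) for t in text.split())
            pred = 1 if score > 0 else 0
            if pred != y:
                for t in text.split():
                    w[t] = w.get(t, 0.0) + (1 if y else -1)
    return w

def predict(w, text):
    return 1 if sum(w.get(t, 0.0) for t in text.split()) > 0 else 0

train_set = synthesize(200)
model = train(train_set)
test_set = synthesize(50)
acc = sum(predict(model, t) == y for t, y in test_set) / len(test_set)
```

The key design point is that the task model can be far smaller than the PLM, since the PLM's knowledge is distilled into the synthesized training set rather than into the model's weights.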
We make the first attempt to leverage external knowledge to accurately perceive and appropriately express implicit emotions in empathetic dialogue generation. (AAAI 2022)