
Deepfake Text Detection in the Wild

Recent advances in large language models highlight the importance of deepfake text detection to avoid potential risks such as fake news propagation and plagiarism. We build a wild testbed by gathering human-written texts from diverse sources and deepfake texts generated by different LLMs.

Explanation Regeneration via Information Bottleneck

Explanations generated through single-pass prompting often lack sufficiency and conciseness; we develop an information bottleneck method to produce refined explanations that are both sufficient and concise. (Findings of ACL 2023)

A Cognitive Stimulation Dialogue System with Multi-source Knowledge Fusion for Elders with Cognitive Impairment

Cognitive stimulation (CS) helps maintain the cognitive health of elders with cognitive impairment. We construct a Chinese CS conversation dataset and propose a multi-source knowledge fusion method for CS dialogue. (ACL 2023)

Efficient Zero-shot Learning via Dataset Generation

We study a flexible and efficient zero-shot learning method. Given a zero-shot task, we first generate a dataset from scratch using PLMs in an unsupervised manner. Then, we train a tiny task model under the supervision of the synthesized dataset. (EMNLP 2022)
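The generate-then-train pipeline above can be illustrated with a minimal toy sketch. In the actual work a PLM synthesizes the training data; here a hand-written template generator stands in for the PLM, and a bag-of-words nearest-centroid classifier stands in for the tiny task model. All templates and function names below are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of the zero-shot pipeline: (1) synthesize a labeled dataset,
# (2) train a tiny task model on it. The template generator is a stand-in
# for the unsupervised PLM generation step.
from collections import Counter

# Step 1: "generate" a labeled sentiment dataset from scratch (PLM stand-in).
TEMPLATES = {
    "positive": ["a truly great film", "wonderful and moving story",
                 "great acting and a wonderful plot"],
    "negative": ["a truly awful film", "boring and dull story",
                 "dull acting and an awful plot"],
}
synthetic = [(text, label)
             for label, texts in TEMPLATES.items() for text in texts]

# Step 2: train a tiny task model (nearest centroid over word counts).
def train(dataset):
    centroids = {}
    for text, label in dataset:
        centroids.setdefault(label, Counter()).update(text.split())
    return centroids

def predict(centroids, text):
    words = Counter(text.split())
    # Score each label by word overlap with its centroid.
    overlap = lambda c: sum(min(words[w], c[w]) for w in words)
    return max(centroids, key=lambda lbl: overlap(centroids[lbl]))

model = train(synthetic)
print(predict(model, "a wonderful film"))  # -> positive
```

The same shape carries over to the real setting: swap the templates for PLM sampling and the centroid model for a small supervised classifier.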

Knowledge Bridging for Empathetic Dialogue Generation

We make the first attempt to leverage external knowledge to accurately perceive and appropriately express implicit emotions in empathetic dialogue generation. (AAAI 2022)