
Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration

We present ReverseGen, a new paradigm for generating effective synthetic data from the “failure” cases of a target model on specific tasks. We optimize a language model by rewarding it for generating instructions that cause failures in the target model, while employing a selection strategy to maintain instruction diversity. This objective is optimized through an iterative preference learning algorithm.
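A minimal sketch of the loop this summary describes, under our reading of the abstract: the proposer, target, scoring rule, and diversity filter below are illustrative stand-ins, not the paper's actual code.

```python
# Sketch of a failure-inducing exploration round: propose instructions,
# keep a diverse subset, score how badly the target model fails on each,
# and build preference pairs for a DPO-style update on the proposer.
import random

def propose_instructions(proposer, n):
    """Hypothetical: sample n candidate instructions from the proposer LM."""
    return [proposer() for _ in range(n)]

def failure_score(target, instruction):
    """Hypothetical: higher means the target model's answer is judged worse."""
    return target(instruction)

def select_diverse(candidates, k):
    """Placeholder diversity filter; the paper uses a selection strategy
    to keep instructions varied (details omitted here)."""
    return random.sample(candidates, min(k, len(candidates)))

def reversegen_round(proposer, target, n=64, k=16):
    cands = select_diverse(propose_instructions(proposer, n), k)
    scored = [(inst, failure_score(target, inst)) for inst in cands]
    # Preference pairs: failure-inducing instructions are preferred over benign ones.
    pairs = [(win, lose) for win, sw in scored for lose, sl in scored if sw > sl]
    return pairs  # fed to an iterative preference-learning update on the proposer
```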

Exploring the Reliability of Large Language Models as Customized Evaluators for Diverse NLP Tasks

Our analysis shows that 1) LLM evaluators can generate unnecessary criteria or omit crucial ones, deviating slightly from expert judgments, and 2) LLM evaluators excel at general criteria, such as fluency, but struggle with complex criteria, such as numerical reasoning. (COLING 2025)

MAGE: Machine-generated Text Detection in the Wild

Recent advances in large language models highlight the importance of deepfake text detection to avoid potential risks such as fake news propagation and plagiarism. We build a wild testbed by gathering texts from various human writings and deepfake texts generated by different LLMs. (ACL 2024)
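For illustration, a toy detector trained on such a mixed human/machine corpus; this is a simple TF-IDF baseline with invented example texts, not one of the detectors benchmarked in MAGE.

```python
# Toy machine-generated-text detector: label human text 0, LLM text 1,
# and fit a linear classifier over word n-gram features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "a paragraph gathered from human writing ...",   # placeholder example
    "a paragraph generated by some LLM ...",         # placeholder example
]
labels = [0, 1]  # 0 = human, 1 = machine

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)
print(detector.predict(["some new text to classify"]))
```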

Explanation Regeneration via Information Bottleneck

Explanations generated through single-pass prompting often lack sufficiency and conciseness, so we develop an information bottleneck method to produce refined explanations that are both sufficient and concise. (Findings of ACL 2023)
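A sketch of the information-bottleneck trade-off behind this idea: keep what predicts the answer (sufficiency) while compressing away the rest of the raw explanation (conciseness). The variational form and variable names below are illustrative, not the paper's exact estimators.

```python
# Variational IB-style loss: maximize E[log p(y|z)] (sufficiency) minus
# beta * KL(q(z|x) || r(z)) (compression / conciseness).
import torch

def ib_loss(log_p_y_given_z, kl_z_given_x_vs_prior, beta=0.1):
    sufficiency = log_p_y_given_z.mean()
    compression = kl_z_given_x_vs_prior.mean()
    return -(sufficiency - beta * compression)

# Usage with dummy per-example tensors:
loss = ib_loss(torch.tensor([-1.2, -0.8]), torch.tensor([0.5, 0.7]))
```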

A Cognitive Stimulation Dialogue System with Multi-source Knowledge Fusion for Elders with Cognitive Impairment

Cognitive stimulation (CS) during conversation helps maintain the cognitive health of elders with cognitive impairment. We construct a Chinese CS conversation dataset and propose a multi-source knowledge fusion method for CS dialogue. (ACL 2023)
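A minimal sketch of what multi-source knowledge fusion can look like: retrieve candidates from several sources, rank them against the dialogue context, and prepend the fused knowledge to the generator input. The source interface and the overlap-based relevance score are placeholders, not the paper's method.

```python
# Fuse snippets from multiple knowledge sources into the dialogue input.
def fuse_knowledge(context, sources, top_k=3):
    candidates = [snippet for source in sources for snippet in source(context)]
    ranked = sorted(candidates, key=lambda s: relevance(context, s), reverse=True)
    return " ".join(ranked[:top_k])

def relevance(context, snippet):
    """Placeholder: word-overlap relevance between context and snippet."""
    c, s = set(context.split()), set(snippet.split())
    return len(c & s) / (len(s) or 1)

def build_model_input(context, sources):
    return f"[knowledge] {fuse_knowledge(context, sources)} [dialogue] {context}"
```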

Efficient Zero-shot Learning via Dataset Generation

We study a flexible and efficient zero-shot learning method. Given a zero-shot task, we first generate a dataset from scratch using PLMs in an unsupervised manner. Then, we train a tiny task model under the supervision of the synthesized dataset. (EMNLP 2022)
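A sketch of this generate-then-train pipeline: prompt a PLM to synthesize labeled examples, then fit a tiny task model on them. The prompt template and the `plm` callable are placeholders for any text generator; the tiny model here is a simple bag-of-words classifier rather than the architecture used in the paper.

```python
# Step 1: synthesize a labeled dataset with label-conditioned prompts.
def synthesize_dataset(plm, labels, per_label=100):
    data = []
    for label in labels:
        prompt = f"Write a movie review with {label} sentiment:\n"
        data += [(plm(prompt), label) for _ in range(per_label)]
    return data

# Step 2: train a tiny task model under the synthesized supervision.
def train_tiny_model(dataset):
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    texts, labels = zip(*dataset)
    model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model
```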

Knowledge Bridging for Empathetic Dialogue Generation

We make the first attempt to leverage external knowledge to accurately perceive and appropriately express implicit emotions in empathetic dialogue generation. (AAAI 2022)
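A toy illustration of the underlying idea: enrich the dialogue context with emotion-related knowledge so the generator can perceive emotions the speaker never states. The two-entry lexicon below is invented for illustration; the paper draws on external knowledge resources rather than a hand-written dictionary.

```python
# Look up emotion-related concepts for context words and append them,
# making implicit emotions explicit for the downstream generator.
EMOTION_LEXICON = {"exam": ["anxious", "stressed"], "alone": ["lonely", "sad"]}

def enrich_context(utterance):
    concepts = [c for w in utterance.lower().split()
                for c in EMOTION_LEXICON.get(w, [])]
    return f"{utterance} [emotions: {', '.join(concepts) or 'neutral'}]"

print(enrich_context("I failed my exam and feel alone tonight"))
```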