UncSamp: Self-training Sampling with Monolingual Data Uncertainty for Neural Machine Translation

Implementation of our paper "Self-training Sampling with Monolingual Data Uncertainty for Neural Machine Translation" to appear in ACL 2021. [paper]

Brief Introduction

Self-training has proven effective for improving NMT performance by augmenting model training with synthetic parallel data. The common practice is to construct synthetic data based on a randomly sampled subset of large-scale monolingual data, which we empirically show is sub-optimal. In this work, we propose to improve the sampling procedure by selecting the most informative monolingual sentences to complement the parallel data. To this end, we compute the uncertainty of monolingual sentences using the bilingual dictionary extracted from the parallel data. Intuitively, monolingual sentences with lower uncertainty generally correspond to easy-to-translate patterns which may not provide additional gains. Accordingly, we design an uncertainty-based sampling (UncSamp) strategy to efficiently exploit the monolingual data for self-training, in which monolingual sentences with higher uncertainty are sampled with higher probability. Experimental results on large-scale WMT English⇒German and English⇒Chinese datasets demonstrate the effectiveness of the proposed method. Extensive analyses provide a deeper understanding of how the proposed method improves the translation performance.

Figure 1: The framework of self-training with uncertainty-based sampling.
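To make the idea concrete, below is a minimal Python sketch of uncertainty-based sampling under the following assumptions: a word-level bilingual dictionary with translation probabilities is already available (e.g., estimated from word alignments over the parallel data), sentence uncertainty is taken as the average entropy of the translation distributions of its words, and monolingual sentences are sampled with probability proportional to that uncertainty. The function names and the toy dictionary are illustrative only and are not taken from this repository.

```python
import math
import random

def word_entropy(translations):
    """Entropy of a word's translation distribution; higher = more uncertain."""
    return -sum(p * math.log(p) for p in translations.values() if p > 0.0)

def sentence_uncertainty(sentence, bi_dict, default_entropy=0.0):
    """Average translation entropy over the tokens of a monolingual sentence."""
    tokens = sentence.split()
    if not tokens:
        return default_entropy
    total = 0.0
    for tok in tokens:
        trans = bi_dict.get(tok)
        total += word_entropy(trans) if trans else default_entropy
    return total / len(tokens)

def uncsamp(monolingual, bi_dict, k, seed=0):
    """Sample k sentences with probability proportional to their uncertainty."""
    rng = random.Random(seed)
    weights = [sentence_uncertainty(s, bi_dict) for s in monolingual]
    # Guard against an all-zero weight vector (e.g., fully out-of-vocabulary input).
    if sum(weights) == 0.0:
        weights = [1.0] * len(monolingual)
    # random.choices samples with replacement; deduplicate downstream if needed.
    return rng.choices(monolingual, weights=weights, k=k)

if __name__ == "__main__":
    # Toy bilingual dictionary: source word -> {target word: translation probability}.
    bi_dict = {
        "bank":  {"Bank": 0.5, "Ufer": 0.5},   # ambiguous -> high entropy
        "the":   {"die": 0.9, "der": 0.1},     # mostly unambiguous -> low entropy
        "river": {"Fluss": 1.0},               # unambiguous -> zero entropy
    }
    mono = ["the bank", "the river", "the river bank"]
    print(uncsamp(mono, bi_dict, k=2))
```

The sampled sentences would then be translated by the baseline model to build the synthetic parallel data for self-training. The exact dictionary construction and the shaping of the sampling distribution in the paper and in this repository may differ from this sketch; it only illustrates the general idea of biasing the sample towards high-uncertainty monolingual sentences.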

Reference Performance

We evaluate the proposed UncSamp approach on two high-resource translation tasks. As shown, our Transformer-Big models trained on the authentic parallel data achieve performance competitive with, or even better than, the submissions to the WMT competitions. On top of such strong baselines, self-training with RandSamp improves performance by +2.0 and +0.9 BLEU points on the En⇒De and En⇒Zh tasks respectively, demonstrating the effectiveness of large-scale self-training for NMT models.

With our UncSamp approach, self-training achieves a further significant improvement of +1.1 and +0.6 BLEU points over the random sampling strategy, demonstrating the effectiveness of exploiting uncertain monolingual sentences.

Table 1: Evaluation of translation performance.

Further analyses suggest that our UncSamp approach indeed improves the translation quality of high-uncertainty sentences and also benefits the prediction of low-frequency words on the target side.

Table 2: Analysis for uncertain sentences.

Table 3: Analysis for low frequency words.

Public Impact

Citation

Please kindly cite our paper if you find it helpful:

@inproceedings{jiao2021self,
  title     = {Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation},
  author    = {Wenxiang Jiao and Xing Wang and Zhaopeng Tu and Shuming Shi and Michael R. Lyu and Irwin King},
  booktitle = {ACL},
  year      = {2021}
}
