Abstract
In this paper, we explore alternative ways to train a neural machine translation system in a multi-domain scenario. We investigate data concatenation (with fine-tuning), model stacking (multi-level fine-tuning), data selection, and multi-model ensembles. Our findings show that the best translation quality is achieved by building an initial system on a concatenation of the available out-of-domain data and then fine-tuning it on the in-domain data. Model stacking works best when training begins with the data furthest from the target domain and the model is incrementally fine-tuned on each successively closer domain. Data selection did not give the best results, but it is a decent compromise between training time and translation quality. A weighted ensemble of the individual models performed better than data selection; it is beneficial in scenarios where there is no time to fine-tune an already trained model.
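As a rough illustration of the weighted-ensemble idea, the sketch below (not the paper's implementation; the function name, weights, and toy vocabulary are invented for this example) linearly interpolates next-token distributions from several single-domain models, giving the in-domain model the largest weight:

```python
import numpy as np

def ensemble_step(model_probs, weights):
    """Combine next-token distributions from several models.

    model_probs: list of 1-D arrays, one per model, each summing to 1.
    weights:     one non-negative weight per model (normalised below).
    """
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()          # make the weights sum to 1
    stacked = np.stack(model_probs)   # shape: (n_models, vocab_size)
    return weights @ stacked          # weighted average per vocabulary item

# Toy example: a 4-word vocabulary and three domain models,
# ordered from in-domain to furthest out-of-domain.
p_in_domain   = np.array([0.70, 0.10, 0.10, 0.10])
p_near_domain = np.array([0.40, 0.30, 0.20, 0.10])
p_far_domain  = np.array([0.25, 0.25, 0.25, 0.25])

combined = ensemble_step([p_in_domain, p_near_domain, p_far_domain],
                         weights=[0.6, 0.3, 0.1])
print(combined)  # [0.565 0.175 0.145 0.115]
```

Because the combination happens at decoding time, such an ensemble can reuse already trained models as-is, which is why it suits settings without time for further fine-tuning.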
| Original language | English |
|---|---|
| Pages | 66-73 |
| Number of pages | 8 |
| Publication status | Published - 15 Dec 2017 |
| Event | 14th International Conference on Spoken Language Translation, Tokyo, Japan, 14 Dec 2017 → 15 Dec 2017 |
Conference
| Conference | 14th International Conference on Spoken Language Translation |
|---|---|
| Country/Territory | Japan |
| City | Tokyo |
| Period | 14/12/17 → 15/12/17 |