Sequence-to-Sequence Based English-Chinese Translation Model

Intellectual Systems and Technologies
Authors:
Abstract:

In recent years, with continuous theoretical advances in artificial intelligence, artificial neural networks have become novel tools for machine translation. Compared with traditional Statistical Machine Translation (SMT), neural-network-based Neural Machine Translation (NMT) surpasses SMT in many respects, such as translation accuracy, long-distance reordering, syntax, and tolerance to noisy data. Since the emergence of sequence-to-sequence (seq2seq) models in 2014 and the subsequent introduction of attention mechanisms, NMT has been further refined and its performance has steadily improved. This article uses the now-popular sequence-to-sequence model to construct a neural machine translation model from English to Chinese. In addition, it replaces the traditional RNN with Long Short-Term Memory (LSTM) units to address the vanishing- and exploding-gradient problems that RNNs face on long-distance dependencies. An attention mechanism is also incorporated into the model: when performing prediction tasks, it allows the neural network to focus on the relevant parts of the input sequence and attend less to the unrelated parts. In the experimental part, the NMT model described in this article is implemented with TensorFlow.
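As a rough illustration of the architecture summarized above, the following TensorFlow/Keras sketch wires together an LSTM encoder, an LSTM decoder, and a dot-product attention layer. It is a minimal sketch under stated assumptions, not the authors' exact model: the vocabulary sizes, embedding and hidden dimensions, layer choices, and variable names are all illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's exact model) of an
# LSTM-based seq2seq translator with attention in TensorFlow/Keras.
# All sizes below are assumed placeholders, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, Model

SRC_VOCAB, TGT_VOCAB = 10000, 12000   # assumed vocabulary sizes
EMB_DIM, HID_DIM = 256, 512           # assumed model dimensions

# Encoder: embed the English source tokens and run them through an LSTM,
# keeping the per-step outputs so the attention layer can look back at them.
enc_inputs = layers.Input(shape=(None,), dtype="int32", name="src_tokens")
enc_emb = layers.Embedding(SRC_VOCAB, EMB_DIM, mask_zero=True)(enc_inputs)
enc_outputs, enc_h, enc_c = layers.LSTM(
    HID_DIM, return_sequences=True, return_state=True)(enc_emb)

# Decoder: an LSTM initialized with the encoder's final states, consuming
# the shifted Chinese target tokens (teacher forcing during training).
dec_inputs = layers.Input(shape=(None,), dtype="int32", name="tgt_tokens")
dec_emb = layers.Embedding(TGT_VOCAB, EMB_DIM, mask_zero=True)(dec_inputs)
dec_outputs, _, _ = layers.LSTM(
    HID_DIM, return_sequences=True, return_state=True)(
        dec_emb, initial_state=[enc_h, enc_c])

# Dot-product attention: each decoder step attends over all encoder steps,
# so prediction can focus on the relevant source words.
context = layers.Attention()([dec_outputs, enc_outputs])
combined = layers.Concatenate()([dec_outputs, context])

# Project each decoder step to a distribution over the Chinese vocabulary.
logits = layers.Dense(TGT_VOCAB)(combined)

model = Model([enc_inputs, dec_inputs], logits)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()
```

In this sketch, replacing the simple RNN cell with an LSTM is what mitigates the vanishing- and exploding-gradient problems mentioned in the abstract, while the Attention layer supplies each decoding step with a context vector computed as a weighted sum of the encoder outputs.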