In recent years, Neural Machine Translation (NMT) has achieved state-of-the-art performance in translating from a source language to a target language. However, many of the proposed methods use word embedding techniques to represent a sentence in the source or target language. Character embedding techniques have been suggested to better represent the words in a sentence. Moreover, recent NMT models use an attention mechanism in which the most relevant words in the source sentence are used to generate each target word. The problem with this approach is that some words are translated multiple times while others are not translated at all. To address this problem, a coverage model has been integrated into NMT to keep track of already-translated words and focus attention on the untranslated ones. In this research, we present a new architecture that uses character embeddings to represent the source and target words, together with a coverage model to ensure that all words are translated. We compared our model with previous models, and ours shows comparable improvements. Our model achieves an improvement of 2.87 BLEU (BiLingual Evaluation Understudy) over the attention-based baseline for German-English translation, and a 0.34 BLEU improvement for Catalan-Spanish translation.
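The coverage idea can be illustrated with a minimal NumPy sketch, under assumptions of our own (the function and penalty term here are hypothetical, not the paper's exact formulation): a coverage vector accumulates the attention weights assigned to each source position across decoding steps, and future attention scores are discounted for positions that are already well covered.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coverage_attention(scores, coverage, penalty=1.0):
    # Discount source positions already covered: `coverage` holds
    # the accumulated attention each source word has received.
    weights = softmax(scores - penalty * coverage)
    return weights, coverage + weights

rng = np.random.default_rng(0)
scores = rng.normal(size=5)   # alignment scores for 5 source words
coverage = np.zeros(5)        # nothing translated yet

# Decode a few target words: as coverage grows on attended positions,
# attention is pushed toward source words that were so far ignored.
for _ in range(3):
    weights, coverage = coverage_attention(scores, coverage)
```

Since each step's attention weights sum to one, the coverage vector sums to the number of decoding steps taken, which is what lets the model detect under-translated source words.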