The objective of this microservice is to implement a translation system using Marian Server and Marian-generated models.
The API specification file is defined according to OpenAPI v3 (OAS3).
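As a minimal sketch of how a client might call such a service, the snippet below builds a JSON request with Python's standard library. The `/translate` path, the host/port, and the payload field names (`text`, `source`, `target`) are illustrative assumptions; the actual contract is whatever the service's OAS3 specification file defines.

```python
import json
from urllib import request

def build_payload(text, source="en", target="fr"):
    """Build the JSON request body.

    NOTE: the field names here are illustrative assumptions;
    consult the service's OAS3 specification for the real schema.
    """
    return json.dumps({"text": text, "source": source, "target": target}).encode()

def translate(text, host="http://localhost:5000"):
    # NOTE: the /translate route and default host are assumptions for
    # illustration; the actual routes are defined in the OAS3 spec file.
    req = request.Request(
        f"{host}/translate",
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Keeping the payload construction in its own helper makes the request shape easy to adapt once the real schema is known.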
The model used in this API was trained with the Marian-NMT framework and is based on the Transformer architecture. It was trained on the OPUS-MT Giga-fren and Europarl v7 corpora and validated on the Tatoeba en-fr corpus.
Marian-NMT is a highly efficient neural machine translation framework; it is now the engine behind Microsoft Translator and the Bergamot project. An important aspect to take into account is how to quantify the quality of the translation system. The most widely used metric for this task is BLEU. Computing it consistently can be tricky, which is where a tool like SacreBLEU comes in handy. The graphic below was produced by the Bergamot project, which compared popular translation solutions using SacreBLEU. From there, we can easily compare our solution against standardized SacreBLEU test sets.
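To make the metric concrete, here is a minimal, illustrative sentence-level BLEU implementation: modified n-gram precision up to 4-grams, combined by a geometric mean and scaled by a brevity penalty. This sketch is for intuition only; real evaluations should use SacreBLEU, which standardizes tokenization and reporting.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams occurring in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Toy sentence-level BLEU (0-100). Not a SacreBLEU replacement."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        # Clipped overlap: each hypothesis n-gram counts at most as
        # often as it appears in the reference.
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        total = max(sum(hyp_ngrams.values()), 1)
        # Tiny floor avoids log(0) when a higher-order n-gram never matches.
        precisions.append(max(overlap, 1e-9) / total)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return 100 * bp * geo_mean

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 1))  # → 100.0
```

A perfect match scores 100, and any missing n-gram or shortened output pulls the score down, which is exactly the behavior the comparison chart relies on.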
We can see that the model used in this microservice does not achieve the highest score. Nonetheless, it still reaches a better BLEU score than the Google Translate API. Bergamot performs slightly better than the Microsoft API overall, and as mentioned before, Bergamot is built on Marian-NMT. We can conclude that the model behind this microservice offers good overall performance.