Language translation

The objective of this microservice is to implement a translation system using Marian Server and Marian-generated models.
The API specification file is defined according to OpenAPI v3 (OAS3).
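For illustration, here is a minimal sketch of a client call, assuming the microservice exposes a /translate endpoint that accepts a JSON body with text, source and target fields; the exact paths, parameters and response shape are those defined in the OAS3 specification file, so the names below are assumptions.

```python
import requests

# Hypothetical endpoint and payload shape; the authoritative contract is the
# OpenAPI v3 (OAS3) specification file shipped with the microservice.
API_URL = "http://localhost:8080/translate"

payload = {
    "text": "Hello!",   # text to translate
    "source": "en",     # source language code (assumed field name)
    "target": "fr",     # target language code (assumed field name)
}

response = requests.post(API_URL, json=payload, timeout=10)
response.raise_for_status()

# Assumed response shape, e.g. {"translation": "Bonjour !"}
print(response.json())
```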


About the model

The model used in this API was trained with the Marian-NMT framework and is based on the Transformer architecture. It was trained on the Opus-MT Giga-fren and Europarl v7 corpora and validated on the Tatoeba en-fr corpus.

Comparison

Marian-NMT is a very efficient Neural Machine Translation framework; it is now the engine behind Microsoft Translator and the Bergamot project. An important aspect to take into account is how to quantify the performance of a translation system. The most commonly used metric for this task is BLEU. Such an evaluation can be tedious to implement, which is where a tool like SacreBLEU comes in handy. The graphic below was produced thanks to the Bergamot project, which compared popular translation solutions using SacreBLEU. From there, we can easily compare our solution against standardized SacreBLEU test sets.
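To make the evaluation step concrete, here is a small sketch of a BLEU computation with the sacrebleu Python package; the hypotheses and references below are placeholders rather than the actual test sets used in the comparison, which would instead come from a standardized SacreBLEU test set.

```python
import sacrebleu

# Placeholder system outputs and aligned reference translations; in practice
# these would come from a standardized test set (e.g. a WMT/SacreBLEU set).
hypotheses = ["Bonjour le monde !", "Comment allez-vous ?"]
references = [["Bonjour tout le monde !", "Comment vas-tu ?"]]

# corpus_bleu takes a list of hypotheses and a list of reference streams,
# each stream aligned one-to-one with the hypotheses.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```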

The systems compared below are:
  1. ICoSys: this translation microservice
  2. Bergamot: Bergamot-translation
  3. Google: Google translation API
  4. Microsoft: Azure Cognitive Services Translator API

We can see that the model used in the microservice is not the best-performing one. Nonetheless, it still achieves a higher BLEU score than the Google translation API. Bergamot performs slightly better than the Microsoft API overall, and, as mentioned before, Bergamot is based on Marian-NMT. We can conclude that the model behind this microservice offers good overall performance.