JNLE Special Issue: Machine Translation Using Comparable Corpora


Statistical machine translation based on parallel corpora has been very successful. The major search engines' translation systems, which are used by millions of people, primarily use this approach, and it has made it possible to add new language pairs in a fraction of the time required by more traditional rule-based methods.

In contrast, research on comparable corpora is still at an earlier stage. Comparable corpora can be defined as monolingual corpora covering roughly the same subject area in different languages but without being exact translations of each other.

However, despite its tremendous success, the use of parallel corpora in MT has a number of drawbacks:

  1. It has been shown that translated language differs from original language; for example, the “associative texture” is lost in translation.
  2. As they require translation, parallel corpora will always be a far scarcer resource than comparable corpora. This is a severe drawback for a number of reasons:

    1. Of the roughly 7000 world languages, only about 600 have a written form, and the vast majority are of the "low resource" type.
    2. The number of possible language pairs grows quadratically with the number of languages. When using parallel corpora, one bitext is needed for each language pair; when using comparable corpora, one monolingual corpus per language suffices.
    3. For improved translation quality, translation systems specialized for particular genres and domains are desirable. But it is far more difficult to acquire appropriate parallel than comparable training corpora.
    4. As language evolves over time, the training corpora should be updated on a regular basis. Again, this is more difficult in the parallel case.
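The quadratic growth mentioned in point 2 above can be made concrete with a quick calculation. A minimal sketch (the language counts below are illustrative, taken from the figures in point 1):

```python
def bitexts_needed(n: int) -> int:
    """Parallel-corpus MT: one bitext per unordered language pair."""
    return n * (n - 1) // 2

def monolingual_needed(n: int) -> int:
    """Comparable-corpus MT: one monolingual corpus per language suffices."""
    return n

# Resource requirements as the number of languages grows.
for n in (10, 100, 600):
    print(f"{n} languages: {bitexts_needed(n)} bitexts "
          f"vs. {monolingual_needed(n)} monolingual corpora")
```

For the roughly 600 written languages, full pairwise coverage would require 179,700 bitexts, but only 600 monolingual corpora.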

For such reasons it would be a big step forward if it were possible to base statistical machine translation on comparable rather than on parallel corpora: the acquisition of training data would be far easier, and the unnatural “translation bias” (the source language shining through) in the training data would be avoided.

But is there any evidence that this is possible? Motivation for using comparable corpora in MT research comes from a cognitive perspective: experience shows that people who have learned a second language completely independently of their mother tongue can nevertheless translate between the two languages. That is, human performance demonstrates that there must be a way to bridge the gap between languages that does not rely on parallel data. Using parallel data for MT is of course a convenient shortcut. But avoiding this shortcut by basing MT on comparable corpora may well be a key to a better understanding of language, and to better MT quality.

Work on comparable corpora in the context of MT has been ongoing for almost 20 years. It has turned out to be a very hard problem to solve, but as it is among the grand challenges in multilingual NLP, interest has steadily increased. Apart from the increase in publications, this can be seen from the considerable number of research projects (such as ACCURAT and TTC) which are fully or partially devoted to MT using comparable corpora. Given also the success of the workshop series on Building and Using Comparable Corpora (BUCC), which is now in its seventh year, and following the publication of a related book (http://www.springer.com/computer/ai/book/978-3-642-20127-1), we think that it is now time to devote a journal special issue to the field. It is meant to bundle the latest top-class research, make it available to everybody working in the field, and at the same time give an overview of the state of the art to all interested researchers.

BUCC Workshops before the Journal Special Issue

Issue     | Venue            | Chairpersons                                                                      | Proceedings
BUCC 2008 | LREC, Marrakech  | Pierre Zweigenbaum, Éric Gaussier, Pascale Fung                                   | PDF
BUCC 2009 | ACL, Singapore   | Pascale Fung, Pierre Zweigenbaum, Reinhard Rapp                                   | PDF [Individual papers]
BUCC 2010 | LREC, Valletta   | Reinhard Rapp, Pierre Zweigenbaum, Serge Sharoff                                  | PDF
BUCC 2011 | ACL, Portland    | Pierre Zweigenbaum, Reinhard Rapp, Serge Sharoff                                  | PDF [Individual papers]
BUCC 2012 | LREC, Istanbul   | Reinhard Rapp, Marko Tadić, Serge Sharoff, Andrejs Vasiļjevs, Pierre Zweigenbaum  | PDF
BUCC 2013 | ACL, Sofia       | Serge Sharoff, Pierre Zweigenbaum, Reinhard Rapp                                  | PDF [Individual papers]
BUCC 2014 | LREC, Reykjavik  | Pierre Zweigenbaum, Ahmet Aker, Serge Sharoff, Stephan Vogel, Reinhard Rapp       | PDF