Since most competitors to Google offerings aren't going to have a hugely profitable core business with which to fund all the data collection and normalization that goes into building a high-quality ML system, the future for poorly capitalized competitors seems bleak to me. This seems to support some of the growing rumblings about enforcing antitrust laws against the large tech companies.
Edit: better, not bigger.
In this case, the author claims pretty good accuracy, almost on par with Google Brain's!
On my test set of 3,000 sentences, the translator obtained a BLEU score of 0.39. BLEU is the benchmark metric used in machine translation, and the current best I could find for English to French is around 0.42 (set by some smart folks at Google Brain). So, not bad.
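For anyone curious what that score actually measures: BLEU is roughly a geometric mean of clipped n-gram precisions, multiplied by a brevity penalty. Here's a minimal single-sentence sketch from scratch; real evaluations use a corpus-level implementation (e.g. sacreBLEU) with smoothing and proper tokenization, so treat this as illustration only:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Count all n-grams of length n in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference, candidate, max_n=4):
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        ref = ngrams(reference, n)
        # Clip candidate n-gram counts by their counts in the reference.
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = sum(cand.values())
        if total == 0 or overlap == 0:
            # A zero precision at any order zeroes the geometric mean
            # (real implementations smooth this instead).
            return 0.0
        precisions.append(overlap / total)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0; a single substituted word already drags the higher-order n-gram precisions down noticeably, which is why scores around 0.4 represent strong systems.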
Pretraining is a very cool idea; I have also seen some good results with pretrained embeddings after doing language modelling. Fast.ai discusses this and I think even has some pretrained embeddings available in their library!
The basic idea is to use word vector embeddings to build a source<->target dictionary, then combine this with a language model to iteratively bootstrap a set of source<->target training examples for use with a conventional ML approach.
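Sketching that first step: the seed dictionary comes from nearest neighbours in a shared embedding space, usually filtered to mutual nearest neighbours for precision. A toy illustration with made-up 2-d vectors (real systems first align two separately trained monolingual spaces, e.g. via Procrustes or adversarial training, and use far higher-dimensional vectors):

```python
import math

# Toy embeddings, assumed already aligned into one shared space.
src = {"cat": [0.9, 0.1], "dog": [0.1, 0.9]}
tgt = {"chat": [0.88, 0.12], "chien": [0.15, 0.85]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(vec, vocab):
    # Word in `vocab` whose vector is most similar to `vec`.
    return max(vocab, key=lambda w: cosine(vec, vocab[w]))

# Propose a translation for every source word...
seed = {s: nearest(v, tgt) for s, v in src.items()}
# ...then keep only mutual nearest neighbours as high-precision seed pairs.
seed = {s: t for s, t in seed.items() if nearest(tgt[t], src) == s}
```

The resulting `seed` dictionary can then label synthetic sentence pairs, which a conventional supervised model trains on, producing better translations to bootstrap from on the next iteration.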
As to that, though, I'm definitely not holding my breath :)
Also, people make the assumption that as soon as we make strong AI comparable to a human, we will be able to translate anything and everything (let's say we are excluding the last mile for argument's sake). That assumption ignores an important fact: sometimes translation is a team effort where certain words, phrases or concepts are debated among multiple translators to reach a consensus. It's not always done by a single intelligence.
Some people might argue that's because people have a far more limited capacity to consider all the examples in the corpus, whereas a machine can consider all of it lightning fast and thus arrive at the right answer.
A perfect edge case that illustrates why that doesn't matter, and where multiple human intelligences will often grapple with how something should be translated, is deciding what name to give a movie you are translating for an international audience. The same movie often has quite different names depending on which language it gets translated into. There isn't actually a correct answer; there are just answers that are deemed 'good enough'.
Secondly, you are conflating concepts in my opinion. Localizing a movie may involve translators translating lines, but it also involves the creative work of localizing the title and other things, as you mentioned. A machine translator by today's definition translates a string of text in one language to a string of text in another. We needn't consider every type of work a human translator might do; it would be quite enough of a difference to close the gap on translating strings straightforwardly.
Also, anyone who is able to verify that a translation conveys a meaning in enough of the same direction as the original utterance by definition doesn't need a translation as they know both the source and target language.
It's everybody else, who are not able to verify, for whom the accuracy matters, for they have no recourse but to trust it. They are frequently led astray.
A couple of examples to illustrate.
掘り炬燵 (horigotatsu) is a noun referring to a "low, covered table placed over a hole in the floor of a Japanese-style room".
Now, given that this is something that doesn't exist in any Western, English-speaking country, it simply doesn't have a mapping in English. The best that can be done is to give an explanation of what it is.
Google Translate "translates" it as "digging". Welcome to the last mile. In this case Google should just spit out an explanation of what it is. "Digging" is entirely incorrect and unhelpful.
But it gets worse. Imagine if it's used in a sentence. Here is a good example of a last mile issue in translation. It's impossible for you to translate it directly, so you have to fall back to a best effort attempt and either simplify and lose some information or stop mid-sentence and give an explanation of what the thing actually is.
This sentence is all kinds of problematic from a translation point of view.
Google translates it as: "I sat on a digging stone and ate rice."
That borders on D+/C- in terms of quality for me. But there are a few good reasons as to why.
The original Japanese doesn't give the context of who is performing the action because that's simply not necessary to say in Japanese; it's almost always just inferred from context in the moment, and that gets lost when you only have a string. Thus it's possible this could be a "he, she, it, we, I, they". If the machine is forced to pick one option, then it will pick one option.
Then there is the horigotatsu part which gets "translated" as "digging stone". What the hell is a digging stone? It ought to just say horigotatsu* and have a footnote. Machine translation today doesn't do footnotes. I wish it did.
Again, there is a lack of context as to the meaning of ご飯 (gohan), which technically can mean cooked white rice but in this case most likely refers to a "meal". Which meal is not specified; it could be breakfast, lunch or dinner, but I'm going to guess it's dinner.
But what should the translation actually be? Is it even fundamentally "translatable"?
One valid translation would be "we sat in the horigotatsu and had dinner". That still requires an explanation.
Anyway, I hope it's a little clearer what I mean that it's not actually always possible to translate things.
I think we can hit parity with humans one day, but it requires fundamentally rethinking certain things at a UX level. For instance, instead of just an input form, Google Translate could be more like a chatbot that probes for more context when needed; that's more my idea of where things ultimately need to wind up. Perhaps a model like Rap Genius, where annotations contain extra details about possible alternatives and why the current word was chosen... That's my 2 cents on the issue.
Being able to provide additional context would be great, but I don't see why it would have to be done in a "human" way to satisfy the constraints.
So true. Often the last mile isn't solvable even for highly trained humans: it's not that it's hard, it's just impossible.
Translators have to find alternative ways to convey the meaning, making choices and often losing some parts completely.
Machines have no chance.
This does not really apply to your movie name example, but in a text machines could try to escape to a meta-level, like humans sometimes do: "There is an old Gaelic saying, which, roughly translated, goes something like this..."
In cases where there is no direct equivalent or even an appropriate localization we usually resort to giving a long form explanation. If the machines can do that, they've won.
I can't see the current techniques getting us that far. It's going to require some new, creative thinking to really push the envelope.
My point is that to reach true parity with human capabilities, machine translation needs to reach the point where it can also recognize "I can't translate that" instead of always trying to spit out an answer and returning hilarious garbage in such situations.
Another behavior I would expect to see is sometimes returning multiple possible results. This is common when a translator doesn't know the exact context but can imagine the range of possible contexts in which an utterance can occur and conveys them all in order to get the message across.
Currently I don't see anyone trying to take this approach. It's all a simple one-string-in, one-string-out model.
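As an interface, what I'm describing is just n-best output. A hypothetical API sketch, where the lookup table stands in for a real beam-search decoder's top-n hypotheses (the example sentence, its candidate translations, and the function name are all mine, purely for illustration):

```python
def translate_nbest(source, n=3):
    # Hypothetical n-best interface: a real system would return its
    # top-n scoring decoder hypotheses here, not consult a table.
    # ご飯を食べた drops the subject, as Japanese routinely does, so
    # several readings are legitimately possible from the string alone.
    nbest = {
        "ご飯を食べた": [
            "I ate a meal.",
            "We had dinner.",
            "She ate rice.",
        ],
    }
    # Fall back to echoing the source when nothing is known, rather
    # than inventing a single confident-looking answer.
    return nbest.get(source, [source])[:n]
```

The point of the shape of this API is that ambiguity becomes visible to the user instead of being silently collapsed into one arbitrary choice.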
Nature was able to do this. Sure, it took a couple billion years of evolution to get to this point, but it is doable. I'm betting that our inventing strong AI within the next 100 years is a near certainty.
Modern neural translation techniques don’t impose a 1:1 mapping on translations, and this works reasonably well between major European languages (English/German/French/Spanish/Italian), where there are large cross-language corpora and large monolingual corpora available. In these languages you’ll often get single words translated to short phrases.
It’s true that this doesn’t solve the Japanese examples you give, but this seems a question of degree rather than an impossibility.
The problem most people seem to have understanding this is two-fold:
1. They assume if you just had more data and better algorithms you could get better results.
2. They have never translated things themselves and come across a case where something didn't have a translation.
Remove the machines from the equation entirely: it's not possible in 100% of cases even for people to do it.
Naturally, linguistically similar languages have more overlap and hence better success overall, but that's really just a nice-to-have.
No matter how similar English and French are, if you ask someone to translate a meme that started in 4chan or Reddit into French you will quickly encounter a case where attempting to do so just doesn't work. I'm sure there are plenty of better examples than that but I don't know French.
It's an elephant in the room and stunningly few people seem to see it standing there.
Lol Google really??
In fact, case in point: 'elephant in the room', if it were used inside a joke that relied on the elephant as part of the joke, would not be possible to translate. It just wouldn't make sense.
I think you are complaining about the literal translation of what is supposed to be a metaphor. I see your point, but I'm not sure that is as big a problem as you imply.
https://www.ukessays.com/essays/english-language/newmark-and... is some useful reading here.
I remember watching a program back in high school that subtitled music videos with their lyrics machine-translated from English to Hungarian. The absurdity was indeed hilarious for a brief period.