Google Translate Service

From GM-RKB

A Google Translate Service is a language translation service that is a Google service.



References

2016



  • http://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html
    • QUOTE: Translate made its debut in 2006 and since then has become one of Google’s most reliable and popular assets; it serves more than 500 million monthly users in need of 140 billion words per day in a different language. It exists not only as its own stand-alone app but also as an integrated feature within Gmail, Chrome and many other Google offerings, where we take it as a push-button given — a frictionless, natural part of our digital commerce.


  • http://spectrum.ieee.org/tech-talk/computing/software/google-translate-gets-a-deep-learning-upgrade
    • QUOTE: Google Translate has already begun using neural machine translation for its 18 million daily translations between English and Chinese. … The deep-learning approach of Google’s neural machine translation relies on a type of software algorithm known as a recurrent neural network. The neural network consists of nodes, also called artificial neurons, arranged in a stack of layers consisting of 1,024 nodes per layer. A network of eight layers acts as the “encoder,” which takes the sentence targeted for translation — let’s say from Chinese to English — and transforms it into a list of “vectors.” Each vector in the list represents the meanings of all the words read so far in the sentence, so that a vector farther along the list will include more word meanings.
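The encoder described above, which reads a sentence word by word and emits one vector per prefix, can be sketched with a single-layer recurrent update (the real GNMT system stacks eight layers of 1,024 units; the function name, weight matrices, and tiny dimensions here are illustrative assumptions, not the production design):

```python
import numpy as np

def rnn_encode(embeddings, W_h, W_x):
    """Toy recurrent encoder: each output vector summarizes
    all words read so far, as in the quoted description."""
    d = W_h.shape[0]
    h = np.zeros(d)                      # initial hidden state
    states = []
    for x in embeddings:                 # one embedded word at a time
        h = np.tanh(W_h @ h + W_x @ x)   # fold the next word into the state
        states.append(h)
    return np.array(states)              # one vector per sentence prefix
```

Because each state is computed from the previous one, a vector farther along the list reflects the meanings of more of the input words, matching the quote's description.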


  • https://research.googleblog.com/2016/09/a-neural-network-for-machine.html
    • QUOTE: The following visualization shows the progression of GNMT as it translates a Chinese sentence to English. First, the network encodes the Chinese words as a list of vectors, where each vector represents the meaning of all words read so far (“Encoder”). Once the entire sentence is read, the decoder begins, generating the English sentence one word at a time (“Decoder”). To generate the translated word at each step, the decoder pays attention to a weighted distribution over the encoded Chinese vectors most relevant to generate the English word (“Attention”; the blue link transparency represents how much the decoder pays attention to an encoded word).
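The attention step in the quote, where the decoder weighs the encoded source vectors by relevance at each output step, can be sketched as simple dot-product attention (a common formulation; GNMT's actual attention network is more elaborate, and the names below are illustrative assumptions):

```python
import numpy as np

def attend(encoder_vectors, decoder_state):
    """Weighted distribution over encoder vectors, as in the
    quoted 'Attention' step (dot-product attention sketch)."""
    scores = encoder_vectors @ decoder_state   # one relevance score per source word
    scores -= scores.max()                     # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax distribution
    context = weights @ encoder_vectors        # weighted sum fed to the decoder
    return weights, context
```

The `weights` vector corresponds to the blue-link transparency in the visualization: larger weights mean the decoder pays more attention to that encoded word when generating the next English word.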

2014