Google recently open-sourced Lingvo, a powerful deep learning framework for NLP that focuses on sequence models for language-related tasks such as machine translation, speech recognition, and speech synthesis. Over the past two years, Google has published dozens of papers that used Lingvo to achieve state-of-the-art (SOTA) results.
Google has open-sourced one of its internal NLP secret weapons: Lingvo.
This powerful NLP framework has achieved SOTA performance on many tasks across dozens of Google papers.
Lingvo means "language" in Esperanto. The name hints at the framework's roots: it is a general-purpose deep learning framework built on TensorFlow, focused on sequence models for language-related tasks such as machine translation, speech recognition, and speech synthesis.
Lingvo has gained traction inside Google, and the number of researchers using it has surged. Over the past two years, Google has published dozens of papers that used Lingvo to obtain SOTA results, with more on the way.
These include the landmark 2016 machine translation paper "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation," which was also built with Lingvo. That work opened a new chapter in machine translation, marking the field's transition from IBM-style statistical machine translation (PBMT, phrase-based machine translation) to neural machine translation. The system reduced translation errors by 55%-85%, bringing machine translation much closer to average human translation quality.
Beyond machine translation, the Lingvo framework has also been used for tasks such as speech recognition, language understanding, speech synthesis, and speech-to-text translation.
Google lists 26 NLP papers that use the Lingvo framework, published at top venues including ACL, EMNLP, and ICASSP, with multiple SOTA results among them. The full list appears at the end of this article.
Architectures supported by Lingvo include traditional RNN sequence models, Transformer models, models with VAE components, and more.
Google states: "To show our support for the research community and encourage reproducible research, we are open-sourcing the framework and starting to release the models used in our papers."
In addition, Google has released a paper outlining Lingvo's design, walking through the framework's components, and providing examples of advanced features that showcase its capabilities.
Paper:
https://arxiv.org/pdf/1902.08295.pdf
An impressive contributor list: 91 authors!
Abstract
Lingvo is a TensorFlow framework offering a complete solution for collaborative deep learning research, with a particular focus on sequence-to-sequence models. Lingvo models are composed of modular building blocks that are flexible and easy to extend, and experiment configurations are centralized and highly customizable. Distributed training and quantized inference are supported directly within the framework, which also contains a large number of utilities, helper functions, and existing implementations of the latest research ideas. The paper gives an overview of Lingvo's underlying design, introduces the various pieces of the framework, and offers examples of advanced features that showcase its capabilities.
Designed for collaborative research: flexible and fast
An overview of the Lingvo framework, outlining how models are instantiated, trained, and exported for evaluation and serving.
Lingvo was built with collaborative research in mind, and it promotes code reuse by sharing the implementations of common layers across tasks. Moreover, all layers implement the same common interface and are laid out in the same way. This not only produces cleaner, more understandable code, it also makes it very easy to apply improvements someone else made for another task to your own. Enforcing this consistency comes at the cost of extra discipline and boilerplate, but Lingvo tries to minimize both, ensuring fast iteration during research.
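As a rough, self-contained sketch of this convention (a plain-Python mimic for illustration, not Lingvo's actual base classes, which live in `lingvo.core` and build TensorFlow variables), every layer exposes a `Params()` classmethod describing its hyperparameters and an `FProp()` forward method, so configuring and instantiating any layer looks the same:

```python
# Illustrative mimic of Lingvo's shared layer interface: each layer
# defines Params() for its hyperparameters and FProp() for the
# forward pass. Real Lingvo layers subclass lingvo's BaseLayer and
# operate on TensorFlow tensors; this sketch uses plain Python.

class Params(dict):
    """Tiny stand-in for a hyperparameter container."""
    def Set(self, **kwargs):
        self.update(kwargs)
        return self

class BaseLayer:
    @classmethod
    def Params(cls):
        # Every layer's params record which class builds it.
        return Params(cls=cls, name='')

    def __init__(self, params):
        self.params = params

    def FProp(self, inputs):
        raise NotImplementedError

class ScaleLayer(BaseLayer):
    """A hypothetical layer that multiplies its inputs by a factor."""
    @classmethod
    def Params(cls):
        p = super().Params()
        p.Set(scale=1.0)
        return p

    def FProp(self, inputs):
        return [x * self.params['scale'] for x in inputs]

# The uniform interface: configure params, build the layer from them.
p = ScaleLayer.Params().Set(name='scale_layer', scale=2.0)
layer = p['cls'](p)
print(layer.FProp([1.0, 2.0]))  # [2.0, 4.0]
```

Because every layer follows the same `Params`/`FProp` shape, a layer written for one task can be dropped into another task's configuration unchanged.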
Another aspect of collaboration is sharing reproducible results. Lingvo provides a centralized location for checking in model hyperparameter configurations. This not only documents important experiments, it also gives others an easy way to reproduce your results by training an identical model.
An example task configuration in Lingvo. Each experiment's hyperparameters are configured in its own class, separate from the code that builds the network, and checked into version control.
Although Lingvo's initial focus was NLP, it is inherently very flexible, and researchers have already used the framework to successfully implement models for tasks such as image segmentation and point cloud classification. It also supports distillation, GANs, and multi-task models.
At the same time, the framework does not sacrifice speed: it features an optimized input pipeline and fast distributed training.
Finally, Lingvo aims to make productionization straightforward; there is even a well-defined path for porting models to mobile inference.
List of published papers using Lingvo
Translation:
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation. Mia X. Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. ACL 2018.
Revisiting Character-Based Neural Machine Translation with Capacity and Compression. Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. EMNLP 2018.
Training Deeper Neural Machine Translation Models with Transparent Attention. Ankur Bapna, Mia X. Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. EMNLP 2018.
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Technical Report, 2016.
Speech Recognition:
A comparison of techniques for language model integration in encoder-decoder speech recognition. Shubham Toshniwal, Anjuli Kannan, Chung-Cheng Chiu, Yonghui Wu, Tara N. Sainath, and Karen Livescu. IEEE SLT 2018.
Deep Context: End-to-End Contextual Speech Recognition. Golan Pundak, Tara N. Sainath, Rohit Prabhavalkar, Anjuli Kannan, and Ding Zhao. IEEE SLT 2018.
Speech recognition for medical conversations. Chung-Cheng Chiu, Anshuman Tripathi, Katherine Chou, Chris Co, Navdeep Jaitly, Diana Jaunzeikare, Anjuli Kannan, Patrick Nguyen, Hasim Sak, Ananth Sankar, Justin Tansuwan, Nathan Wan, Yonghui Wu, and Xuedong Zhang. Interspeech 2018.
Compression of End-to-End Models. Ruoming Pang, Tara Sainath, Rohit Prabhavalkar, Suyog Gupta, Yonghui Wu, Shuyuan Zhang, and Chung-Cheng Chiu. Interspeech 2018.
Contextual Speech Recognition in End-to-End Neural Network Systems using Beam Search. Ian Williams, Anjuli Kannan, Petar Aleksic, David Rybach, and Tara N. Sainath. Interspeech 2018.
State-of-the-art Speech Recognition With Sequence-to-Sequence Models. Chung-Cheng Chiu, Tara N. Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J. Weiss, Kanishka Rao, Ekaterina Gonina, Navdeep Jaitly, Bo Li, Jan Chorowski, and Michiel Bacchiani. ICASSP 2018.
End-to-End Multilingual Speech Recognition using Encoder-Decoder Models. Shubham Toshniwal, Tara N. Sainath, Ron J. Weiss, Bo Li, Pedro Moreno, Eugene Weinstein, and Kanishka Rao. ICASSP 2018.
Multi-Dialect Speech Recognition With a Single Sequence-to-Sequence Model. Bo Li, Tara N. Sainath, Khe Chai Sim, Michiel Bacchiani, Eugene Weinstein, Patrick Nguyen, Zhifeng Chen, Yonghui Wu, and Kanishka Rao. ICASSP 2018.
Improving the Performance of Online Neural Transducer Models. Tara N. Sainath, Chung-Cheng Chiu, Rohit Prabhavalkar, Anjuli Kannan, Yonghui Wu, Patrick Nguyen, and Zhifeng Chen. ICASSP 2018.
Minimum Word Error Rate Training for Attention-based Sequence-to-Sequence Models. Rohit Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick Nguyen, Zhifeng Chen, Chung-Cheng Chiu, and Anjuli Kannan. ICASSP 2018.
No Need for a Lexicon? Evaluating the Value of the Pronunciation Lexica in End-to-End Models. Tara N. Sainath, Rohit Prabhavalkar, Shankar Kumar, Seungji Lee, Anjuli Kannan, David Rybach, Vlad Schogol, Patrick Nguyen, Bo Li, Yonghui Wu, Zhifeng Chen, and Chung-Cheng Chiu. ICASSP 2018.
Learning hard alignments with variational inference. Dieterich Lawson, Chung-Cheng Chiu, George Tucker, Colin Raffel, Kevin Swersky, and Navdeep Jaitly. ICASSP 2018.
Monotonic Chunkwise Attention. Chung-Cheng Chiu and Colin Raffel. ICLR 2018.
An Analysis of Incorporating an External Language Model into a Sequence-to-Sequence Model. Anjuli Kannan, Yonghui Wu, Patrick Nguyen, Tara N. Sainath, Zhifeng Chen, and Rohit Prabhavalkar. ICASSP 2018.
Language Understanding:
Semi-Supervised Learning for Information Extraction from Dialogue. Anjuli Kannan, Kai Chen, Diana Jaunzeikare, and Alvin Rajkomar. Interspeech 2018.
CaLcs: Continuously Approximating Longest Common Subsequence for Sequence Level Optimization. Semih Yavuz, Chung-Cheng Chiu, Patrick Nguyen, and Yonghui Wu. EMNLP 2018.
Speech Synthesis:
Hierarchical Generative Modeling for Controllable Speech Synthesis. Wei-Ning Hsu, Yu Zhang, Ron J. Weiss, Heiga Zen, Yonghui Wu, Yuxuan Wang, Yuan Cao, Ye Jia, Zhifeng Chen, Jonathan Shen, Patrick Nguyen, and Ruoming Pang. Submitted to ICLR 2019.
Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis. Ye Jia, Yu Zhang, Ron J. Weiss, Quan Wang, Jonathan Shen, Fei Ren, Zhifeng Chen, Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, and Yonghui Wu. NIPS 2018.
Natural TTS Synthesis By Conditioning WaveNet On Mel Spectrogram Predictions. Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, and Yonghui Wu. ICASSP 2018.
On Using Backpropagation for Speech Texture Generation and Voice Conversion. Jan Chorowski, Ron J. Weiss, Rif A. Saurous, and Samy Bengio. ICASSP 2018.
Speech-to-Text Translation:
Leveraging weakly supervised data to improve end-to-end speech-to-text translation. Ye Jia, Melvin Johnson, Wolfgang Macherey, Ron J. Weiss, Yuan Cao, Chung-Cheng Chiu, Naveen Ari, Stella Laurenzo, and Yonghui Wu. Submitted to ICASSP 2019.
Sequence-to-Sequence Models Can Directly Translate Foreign Speech. Ron J. Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. Interspeech 2017.
https://github.com/tensorflow/lingvo/blob/master/PUBLICATIONS.md
Open-source repository:
https://github.com/tensorflow/lingvo
Original title: Google open-sources a heavyweight general-purpose NLP framework used in more than 20 recent papers
Source: WeChat official account 新智元 (WeChat ID: AI_era). Please credit the source when republishing.