Google's WaveNet Technology Generates Speech That Mimics the Human Voice
Google’s DeepMind unit has created a system for machine-generated speech that it says outperforms existing technology by 50 percent. DeepMind says its WaveNet, a deep generative model of raw audio waveforms, can mimic any human voice and generate speech that sounds more natural than the best existing text-to-speech systems.
The same network can be used to synthesize other audio signals such as music.
In blind tests for U.S. English and Mandarin Chinese, human listeners found WaveNet-generated speech sounded more natural than that created with any of Google’s existing text-to-speech programs, which are based on different technologies. WaveNet still underperformed recordings of actual human speech.
The ability of computers to understand natural speech has been revolutionised in the last few years by the application of deep neural networks (e.g., Google Voice Search). However, generating speech with computers - a process usually referred to as speech synthesis or text-to-speech (TTS) - is still largely based on so-called concatenative TTS, in which a very large database of short speech fragments is recorded from a single speaker and then recombined to form complete utterances. This makes it difficult to modify the voice (for example, switching to a different speaker, or altering the emphasis or emotion of their speech) without recording a whole new database.
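To make the concatenative idea concrete, the sketch below is a minimal, hypothetical illustration rather than Google's actual system: it assumes a pre-recorded dictionary mapping each speech unit to a waveform fragment, and builds an utterance by looking the fragments up and joining them.

```python
import numpy as np

# Hypothetical unit database: each speech unit maps to a pre-recorded
# waveform fragment (random arrays stand in for real audio here).
unit_database = {
    "h-e": np.random.randn(800),
    "e-l": np.random.randn(700),
    "l-o": np.random.randn(900),
}

def concatenative_tts(units):
    """Join pre-recorded fragments to form an utterance.

    Changing the voice (or its emphasis and emotion) would require
    re-recording every fragment, which is the limitation described above.
    """
    fragments = [unit_database[u] for u in units]
    return np.concatenate(fragments)

utterance = concatenative_tts(["h-e", "e-l", "l-o"])
print(utterance.shape)  # (2400,)
```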
This has led to a great demand for parametric TTS, where all the information required to generate the data is stored in the parameters of the model, and the contents and characteristics of the speech can be controlled via the inputs to the model. So far, however, parametric TTS has tended to sound less natural than concatenative, at least for syllabic languages such as English. Existing parametric models typically generate audio signals by passing their outputs through signal processing algorithms known as vocoders.
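As a rough illustration of the parametric approach, the toy sketch below (an assumption for illustration, not a production vocoder) has a model output compact acoustic parameters, per-frame pitch and amplitude, which a simplistic sinusoidal "vocoder" turns into a waveform. The point is that the audio is generated from model parameters rather than stitched together from recordings.

```python
import numpy as np

SAMPLE_RATE = 16000
FRAME_LEN = 400  # samples per frame (25 ms at 16 kHz)

def toy_vocoder(pitches_hz, amplitudes):
    """Render a waveform from per-frame pitch and amplitude parameters.

    A real vocoder also models the spectral envelope, aperiodicity, etc.;
    this only shows the parametric pipeline: text -> parameters -> audio.
    """
    phase = 0.0
    frames = []
    for f0, amp in zip(pitches_hz, amplitudes):
        t = np.arange(FRAME_LEN)
        frames.append(amp * np.sin(phase + 2 * np.pi * f0 * t / SAMPLE_RATE))
        phase += 2 * np.pi * f0 * FRAME_LEN / SAMPLE_RATE
    return np.concatenate(frames)

# Parameters a parametric TTS model might predict from input text.
waveform = toy_vocoder(pitches_hz=[120, 130, 125], amplitudes=[0.3, 0.5, 0.4])
print(waveform.shape)  # (1200,)
```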
DeepMind says that WaveNet changes this paradigm by directly modelling the raw waveform of the audio signal, one sample at a time. As well as yielding more natural-sounding speech, using raw waveforms means that WaveNet can model any kind of audio, including music.
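The sample-by-sample idea can be illustrated with a toy autoregressive loop: each new audio sample is drawn from a distribution conditioned on the samples generated so far. This is only a schematic sketch; the real WaveNet uses a deep stack of dilated causal convolutions and a softmax over quantized amplitude values, and the "model" below is a placeholder.

```python
import numpy as np

def generate_autoregressive(model, n_samples, context_len=1024):
    """Generate audio one sample at a time.

    `model` is any function mapping the recent waveform context to a
    probability distribution over the next quantized sample value --
    a stand-in for WaveNet's dilated convolutional network.
    """
    waveform = np.zeros(context_len, dtype=np.int64)  # silent seed context
    for _ in range(n_samples):
        probs = model(waveform[-context_len:])        # P(next sample | past)
        next_sample = np.random.choice(len(probs), p=probs)
        waveform = np.append(waveform, next_sample)
    return waveform[context_len:]

# Dummy model: uniform distribution over 256 quantization levels.
uniform_model = lambda context: np.full(256, 1.0 / 256)
audio = generate_autoregressive(uniform_model, n_samples=100)
print(audio.shape)  # (100,)
```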
DeepMind describes how WaveNet works in this paper.