Microsoft Advances Conversation Transcription Using Virtual Microphone Arrays
Microsoft Research's 'Project Denmark' technology lets you use the microphones in phones and laptops to create a virtual array that can handle real-time conversation transcription and more.
Announced at Build 2019, the new Conversation Transcription capability, part of Microsoft's Azure Speech Service, allows real-time transcription of multi-user conversations with automatic speaker attribution.
While smart speakers are commercially available today, most of them can handle only one person's speech commands at a time and require a wake-up word before each command. The Azure Speech Service, available in preview today, is enhanced by the availability of audio-only or audio-visual microphone array devices built to Microsoft's reference Devices SDK (DDK).
The Conversation Transcription capability expands Microsoft's existing Azure Speech Service to enable real-time, multi-person, far-field speech transcription and speaker attribution. Paired with a Speech DDK, Conversation Transcription can recognize conversational speech from a small group of people in a room and generate a transcription that handles common yet challenging scenarios such as “cross-talk” (overlapping speakers).
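For developers, the capability surfaces through the Azure Speech SDK. The sketch below is a minimal illustration in Python, assuming the azure-cognitiveservices-speech package and its ConversationTranscriber class; exact class, namespace, and event names vary between SDK versions, and the key and region are placeholders.

```python
import time
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)

# ConversationTranscriber produces speaker-attributed results; the
# speechsdk.transcription namespace reflects recent Python SDK versions.
transcriber = speechsdk.transcription.ConversationTranscriber(
    speech_config=speech_config, audio_config=audio_config)

def on_transcribed(evt):
    # Each final result carries the recognized text plus a speaker label.
    print(f"[{evt.result.speaker_id}] {evt.result.text}")

transcriber.transcribed.connect(on_transcribed)

transcriber.start_transcribing_async().get()
time.sleep(30)  # transcribe the conversation for 30 seconds
transcriber.stop_transcribing_async().get()
```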
Microsoft is engaging with selected customers and Systems Integration (SI) partners such as Accenture, Avanade, and Roobo to customize and integrate the Conversation Transcription solution in the US and China, respectively.
The Conversation Transcription capability uses multi-channel data, including audio and visual signals, from a Speech DDK codenamed Princeton Tower. The edge device is based on Microsoft's reference design: a 360-degree audio microphone array, or a microphone array combined with a 360-degree fisheye camera whose audio-visual fusion further improves transcription. The edge device sends the captured signals to the Azure cloud for neural signal processing and speech recognition. Audio-only microphone array DDKs can be purchased from http://ddk.roobo.com, while advanced audio-visual microphone array DDKs are available from Microsoft's SI partners.
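When one of these microphone array DDKs is attached to a host machine, the Speech SDK can be pointed at it explicitly rather than at the default microphone. A minimal sketch, again assuming the Python Speech SDK; the device identifier is a hypothetical placeholder whose real value is platform-specific.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder: the microphone array enumerates with a platform-specific ID,
# so substitute the value your operating system reports for the DDK's array.
array_device_id = "<platform-specific microphone array ID>"

# Route audio capture through the DDK's microphone array rather than the
# default microphone; this AudioConfig can then be passed to a recognizer
# or to the ConversationTranscriber shown earlier.
audio_config = speechsdk.audio.AudioConfig(device_name=array_device_id)
```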
Microsoft's latest research progress, Project Denmark, enables the dynamic creation of a virtual microphone array from a set of existing devices, such as mobile phones or laptops equipped with ordinary microphones, combining them like Lego blocks into a single larger array. Project Denmark can potentially help Microsoft's customers transcribe conversations more easily, anytime and anywhere, using Azure speech services, with or without a dedicated microphone array DDK. Future application scenarios are broad; for example, Microsoft may pair up multiple Microsoft Translator applications so that several people can communicate more effectively across language barriers using their mobile phones.
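The article does not describe Project Denmark's algorithms, but the core idea of an ad-hoc virtual array can be illustrated conceptually: roughly align the streams captured by several independent microphones and combine them into one stronger signal. The sketch below is a simple delay-and-sum combiner using NumPy, purely for illustration; it is not Microsoft's actual signal processing, and it ignores the clock-drift and network-synchronization problems a real system must solve.

```python
import numpy as np

def combine_ad_hoc_array(channels, sample_rate=16000, max_lag_s=0.05):
    """Illustrative delay-and-sum combiner for an ad-hoc 'virtual' microphone array.

    channels: list of 1-D NumPy arrays, one per device, already resampled to a
    common sample rate. Each channel is aligned to the first by the lag that
    maximizes cross-correlation, then the aligned channels are averaged.
    """
    reference = channels[0]
    max_lag = int(max_lag_s * sample_rate)
    aligned = [reference]
    for ch in channels[1:]:
        n = min(len(reference), len(ch))
        corr = np.correlate(reference[:n], ch[:n], mode="full")
        center = n - 1  # index of zero lag in the full correlation
        window = corr[center - max_lag:center + max_lag + 1]
        lag = int(np.argmax(window)) - max_lag
        aligned.append(np.roll(ch[:n], lag))
    length = min(len(a) for a in aligned)
    return np.mean([a[:length] for a in aligned], axis=0)
```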
Accurate speech transcription is difficult when domain vocabulary, such as acronyms, is unavailable to the recognizer. To address this, Microsoft is extending Azure custom speech recognition capabilities, enabling organizations to create custom speech models using their Office 365 data. For Office 365 enterprise customers who opt in to this service, Azure can automatically generate a custom model leveraging Office 365 data such as contacts, emails, and documents in a completely eyes-off, secure, and compliant fashion. This delivers more accurate speech transcription of organization-specific vernacular such as technical terms and people's names.
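For Custom Speech models in general, applications target a deployed model by its endpoint ID; whether the Office 365-derived models are surfaced the same way is not specified in the article. A minimal sketch, assuming the Python Speech SDK and placeholder values throughout.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: key, region, and endpoint ID are illustrative only.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")

# Target a deployed Custom Speech model instead of the baseline model.
speech_config.endpoint_id = "YOUR_CUSTOM_SPEECH_ENDPOINT_ID"

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
print(result.text)
```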