ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement

Wei-Ning Hsu1, Tal Remez1, Bowen Shi1,3, Jacob Donley2, Yossi Adi1,4
1FAIR, Meta AI Research, 2Meta Reality Labs Research,
3Toyota Technological Institute at Chicago, 4The Hebrew University of Jerusalem
{wnhsu,talr,bshi,jdonley,adiyoss}@meta.com
[paper] [code]

Abstract

Prior works on improving speech quality with visual input typically study each type of auditory distortion separately (e.g., separation, inpainting, video-to-speech) and present tailored algorithms. This paper proposes to unify these subjects and study Generalized Speech Enhancement, where the goal is not to reconstruct the exact reference clean signal, but to focus on improving certain aspects of speech. In particular, this paper concerns intelligibility, quality, and video synchronization. We cast the problem as audio-visual speech resynthesis, which is composed of two steps: pseudo audio-visual speech recognition (P-AVSR) and pseudo text-to-speech synthesis (P-TTS). P-AVSR and P-TTS are connected by discrete units derived from a self-supervised speech model. Moreover, we utilize a self-supervised audio-visual speech model to initialize P-AVSR. The proposed model is coined ReVISE. ReVISE is the first high-quality model for in-the-wild video-to-speech synthesis and achieves superior performance on all LRS3 audio-visual enhancement tasks with a single model. To demonstrate its applicability in the real world, ReVISE is also evaluated on EasyCom, an audio-visual benchmark collected under challenging acoustic conditions with only 1.6 hours of training data. Here, too, ReVISE greatly suppresses noise and improves quality.
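
To make the two-stage design concrete, the sketch below wires a pseudo audio-visual speech recognizer to a unit-based synthesizer in PyTorch. All module names, layer sizes, and the unit vocabulary size are illustrative assumptions rather than the released ReVISE implementation; in the actual system, P-AVSR is initialized from a self-supervised audio-visual speech model and P-TTS is a unit-based vocoder.

# Minimal sketch of the two-stage resynthesis pipeline described above.
# Module names, dimensions, and the unit vocabulary size are illustrative
# assumptions, not the released ReVISE implementation.
import torch
import torch.nn as nn


class PseudoAVSR(nn.Module):
    """Maps audio-visual features to a sequence of discrete-unit logits.

    In ReVISE this stage is initialized from a self-supervised audio-visual
    model; a tiny Transformer encoder stands in for it here.
    """

    def __init__(self, feat_dim=512, num_units=1000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.unit_head = nn.Linear(feat_dim, num_units)

    def forward(self, av_feats):                    # (B, T, feat_dim)
        return self.unit_head(self.encoder(av_feats))   # (B, T, num_units)


class PseudoTTS(nn.Module):
    """Synthesizes a waveform from discrete units (unit-to-speech stage).

    A single transposed convolution stands in for the real vocoder.
    """

    def __init__(self, num_units=1000, emb_dim=128, hop=320):
        super().__init__()
        self.embed = nn.Embedding(num_units, emb_dim)
        self.upsample = nn.ConvTranspose1d(emb_dim, 1, kernel_size=hop, stride=hop)

    def forward(self, units):                       # (B, T) integer unit ids
        x = self.embed(units).transpose(1, 2)       # (B, emb_dim, T)
        return self.upsample(x).squeeze(1)          # (B, T * hop) waveform


# Toy end-to-end pass: corrupted audio-visual features in, resynthesized speech out.
p_avsr, p_tts = PseudoAVSR(), PseudoTTS()
av_feats = torch.randn(1, 50, 512)                  # e.g. fused AV features for a short clip
units = p_avsr(av_feats).argmax(dim=-1)             # predicted self-supervised units
waveform = p_tts(units)                             # resynthesized speech
print(waveform.shape)                               # torch.Size([1, 16000])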

Real-world noisy ego-centric recordings from the EasyCom dataset.

EasyCom contains ego-centric video samples recorded from glasses equipped with a microphone array and a camera. The audio contains a significant amount of background noise and overlapping speech, so enhancement requires both denoising and separation. The following samples are drawn from the ReVISE model trained on EasyCom.
Input video (distant mic) | Ref. video (close mic) | Beamformed audio | Beamformed audio + ReVISE (ours)

Video-to-speech synthesis with in-the-wild samples

We evaluate our universal ReVISE model, trained on LRS3, on samples from the AV-HuBERT blog for video-to-speech synthesis. We present samples for: (1) the input (silent) video; (2) the target audio; (3) the ReVISE model output. The model generalizes well to samples not drawn from the training dataset.
Input video (silent) | Ref. video | ReVISE (ours)

Audio-visual speech inpainting with in-the-wild samples

Similar to the section above, we evaluate our universal ReVISE model, trained on LRS3, on samples from the AV-HuBERT blog for speech inpainting. We present samples for: (1) the input video, with 30%, 50%, or 70% of frames dropped (one setting per row below); (2) the target audio; (3) the ReVISE model output. The model generalizes well to samples not drawn from the training dataset. A minimal sketch of the frame-dropping setup follows the samples.
Input video (30% frames dropped) | Ref. video | ReVISE (ours)
Input video (50% frames dropped) | Ref. video | ReVISE (ours)
Input video (70% frames dropped) | Ref. video | ReVISE (ours)
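
For reference, the snippet below shows one way the inpainting inputs above could be constructed by dropping a random fraction of video frames. The uniform-random frame selection and zero-filling used here are illustrative assumptions, not necessarily the exact masking protocol used in the paper.

# Illustrative construction of inpainting inputs: zero out a random
# fraction of video frames (the selection strategy is an assumption).
import torch


def drop_frames(video, drop_ratio, generator=None):
    """Zero out a random subset of frames.

    video: tensor of shape (T, C, H, W); drop_ratio: fraction in [0, 1].
    Returns the masked video and a boolean mask of the dropped frames.
    """
    num_frames = video.shape[0]
    num_drop = int(round(drop_ratio * num_frames))
    perm = torch.randperm(num_frames, generator=generator)
    dropped = torch.zeros(num_frames, dtype=torch.bool)
    dropped[perm[:num_drop]] = True
    masked = video.clone()
    masked[dropped] = 0.0
    return masked, dropped


# Example: a 3-second clip at 25 fps with 50% of frames dropped.
clip = torch.randn(75, 3, 88, 88)
masked_clip, dropped = drop_frames(clip, drop_ratio=0.5)
print(dropped.float().mean())  # ~0.5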