V-LASIK:

Consistent Glasses-Removal from Videos Using Synthetic Data

Our method receives an input video of a person wearing glasses and consistently removes the glasses while preserving the original content and identity of that person. As demonstrated in the examples above, our method successfully removes the glasses even in the presence of eye blinks, heavy makeup, and reflections. More examples are presented below.

Abstract

Diffusion-based generative models have recently shown remarkable image and video editing capabilities. However, local video editing, particularly removal of small attributes like glasses, remains a challenge. Existing methods either alter the videos excessively, generate unrealistic artifacts, or fail to perform the requested edit consistently throughout the video. In this work, we focus on consistent and identity-preserving removal of glasses in videos, using it as a case study for consistent local attribute removal in videos. Due to the lack of paired data, we adopt a weakly supervised approach and generate synthetic imperfect data, using an adjusted pretrained diffusion model. We show that despite data imperfection, by learning from our generated data and leveraging the prior of pretrained diffusion models, our model is able to perform the desired edit consistently while preserving the original video content. Furthermore, we exemplify the generalization ability of our method to other local video editing tasks by applying it successfully to facial sticker-removal. Our approach demonstrates significant improvement over existing methods, showcasing the potential of leveraging synthetic data and strong video priors for local video editing tasks.

Pipeline

Method overview:

Step 1: We create an imperfect synthetic paired dataset by generating a glasses mask for each video frame and inpainting the masked region. We inpaint each frame using an adjusted version of ControlNet inpainting: we replace the self-attention layers with cross-frame attention (cf attn) and, at each diffusion step, blend the generated latent images with the noised masked original latent images. The data generated in this step is imperfect; for example, in the middle frame the person blinks, yet its generated pair has open eyes. Nevertheless, thanks to the strong prior of the model, the data is good enough for finetuning an image-to-image diffusion model and achieving satisfactory results. A minimal sketch of these two ingredients is shown below.
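The following PyTorch sketch illustrates the two ingredients named above: cross-frame attention and per-step latent blending. Function names, tensor shapes, and the choice of the first frame as the attention anchor are illustrative assumptions, not the actual implementation.

```python
import torch

def cross_frame_attention(q, k, v, anchor=0):
    """Cross-frame attention sketch: every frame attends to the keys/values of a
    shared anchor frame (here frame 0) instead of its own, which ties appearance
    across frames. Shapes: (frames, heads, tokens, dim)."""
    f = q.shape[0]
    k_a = k[anchor:anchor + 1].expand(f, -1, -1, -1)
    v_a = v[anchor:anchor + 1].expand(f, -1, -1, -1)
    attn = torch.softmax(q @ k_a.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v_a

def add_noise(latents, noise, alpha_bar_t):
    """Standard DDPM forward noising of clean latents to timestep t
    (alpha_bar_t is the cumulative noise-schedule coefficient as a tensor)."""
    return alpha_bar_t.sqrt() * latents + (1.0 - alpha_bar_t).sqrt() * noise

def blend_step(generated, original, mask, noise, alpha_bar_t):
    """Per-step blending: inside the glasses mask keep the generated latents;
    outside the mask anchor to the noised original latents so the unedited
    frame content is preserved. `mask` is 1 inside the glasses region."""
    noised_original = add_noise(original, noise, alpha_bar_t)
    return mask * generated + (1.0 - mask) * noised_original
```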

Step 2: Given our model trained to remove glasses from images, we combine it with a motion prior module to generate temporally consistent glasses-free videos from previously unseen videos. To preserve the original frame colors, at each diffusion step we blend the generated frames with the noised masked original latent images, and before decoding we apply Inside-Out Normalization (ION) to better align the statistics inside and outside of the mask.
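The exact form of ION is not spelled out above; the PyTorch sketch below shows one plausible reading, in which the per-channel statistics of the latents inside the mask are re-standardized to match those of the region outside the mask before decoding. The per-channel choice and tensor shapes are assumptions.

```python
import torch

def inside_out_normalization(latents, mask, eps=1e-6):
    """Illustrative sketch of Inside-Out Normalization (ION): re-standardize the
    latents inside the mask so their per-channel mean/std match the statistics
    of the untouched region outside the mask.
    latents: (B, C, H, W); mask: (B, 1, H, W), 1 inside the edited region."""
    inside, outside = mask, 1.0 - mask

    def masked_stats(x, m):
        # Per-channel mean/std over the pixels selected by mask m
        denom = m.sum(dim=(2, 3), keepdim=True).clamp_min(1.0)
        mean = (x * m).sum(dim=(2, 3), keepdim=True) / denom
        var = ((x - mean) ** 2 * m).sum(dim=(2, 3), keepdim=True) / denom
        return mean, var.clamp_min(eps).sqrt()

    mu_in, std_in = masked_stats(latents, inside)
    mu_out, std_out = masked_stats(latents, outside)
    normalized = (latents - mu_in) / std_in * std_out + mu_out
    # Replace only the inside-mask region; leave the outside untouched
    return inside * normalized + outside * latents
```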

Comparisons

We compare our method with different video editing and inpainting methods. Given an input video, some methods remove the glasses but either generate deformations and unrealistic artifacts or fail to preserve the identity, eyelid positions, or temporal consistency of the video. Other methods do not remove the glasses at all. In contrast, our method successfully removes the glasses while preserving the identity, realism, and temporal consistency of the video.

Comparison videos (three examples), left to right: Original, TokenFlow, RAVE, FGT, ProPainter, Ours.

More glasses-removal results

Stickers

To show that our method works for a different use case, we generate a synthetic dataset of videos with stickers on faces, imitating real facial stickers, tattoos, or synthetic features added by social-media apps, and show that our method is able to remove them. As shown in the results below, after being trained on our synthetic data, our model seamlessly removes the stickers from the videos.

Left: input from our synthetic dataset; right: output of our model.
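As an illustration of how such a synthetic sticker frame could be composited, the sketch below alpha-blends an RGBA sticker onto a frame at a given face location. The function name, the landmark-based `center_xy`, and the fixed square resize are hypothetical details, not the actual dataset-generation code.

```python
from PIL import Image

def paste_sticker(frame_path, sticker_path, center_xy, size, out_path):
    """Composite an RGBA sticker onto a single video frame. `center_xy` would come
    from a face-landmark tracker so the sticker stays attached to the same facial
    point across frames; here it is just an (x, y) pixel coordinate."""
    frame = Image.open(frame_path).convert("RGBA")
    sticker = Image.open(sticker_path).convert("RGBA").resize((size, size))
    top_left = (center_xy[0] - size // 2, center_xy[1] - size // 2)
    frame.alpha_composite(sticker, dest=top_left)  # respects the sticker's alpha channel
    frame.convert("RGB").save(out_path)

# Example usage (hypothetical paths):
# paste_sticker("frame_0001.png", "star_sticker.png",
#               center_xy=(420, 310), size=96, out_path="frame_0001_sticker.png")
```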