
Netflix AI Team Just Open-Sourced VOID: an AI Model That Erases Objects From Videos — Physics and All

Video editing has always had a dirty secret: removing an object from footage is easy; making the scene look like it was never there is brutally hard. Take out a person holding a guitar, and you're left with a floating instrument that defies gravity. Hollywood VFX teams spend weeks fixing exactly this kind of problem. A team of researchers from Netflix and INSAIT, Sofia University 'St. Kliment Ohridski,' released VOID (Video Object and Interaction Deletion), a model that can do it automatically.

VOID removes objects from videos along with all the interactions they induce on the scene: not just secondary effects like shadows and reflections, but physical interactions like objects falling when a person is removed.

What Problem Is VOID Actually Solving?

Standard video inpainting models, the kind used in most editing workflows today, are trained to fill in the pixel region where an object was. They are essentially very sophisticated background painters. What they don't do is reason about causality: if I remove an actor who is holding a prop, what should happen to that prop?

Existing video object removal methods excel at inpainting content 'behind' the object and correcting appearance-level artifacts such as shadows and reflections. However, when the removed object has more significant interactions, such as collisions with other objects, existing models fail to correct them and produce implausible results.

VOID is built on top of CogVideoX and fine-tuned for video inpainting with interaction-aware mask conditioning. The key innovation is in how the model understands the scene: not just 'what pixels should I fill?' but 'what is physically plausible after this object disappears?'

The canonical example from the research paper: if a person holding a guitar is removed, VOID also removes the person's effect on the guitar, causing it to fall naturally. That's not trivial. The model has to understand that the guitar was being supported by the person, and that removing the person means gravity takes over.

And unlike prior work, VOID was evaluated head-to-head against real competitors. Experiments on both synthetic and real data show that the approach better preserves consistent scene dynamics after object removal compared to prior video object removal methods, including ProPainter, DiffuEraser, Runway, MiniMax-Remover, ROSE, and Gen-Omnimatte.

https://arxiv.org/pdf/2604.02296

The Architecture: CogVideoX Under the Hood

VOID is built on CogVideoX-Fun-V1.5-5b-InP, a model from Alibaba PAI, and fine-tuned for video inpainting with interaction-aware quadmask conditioning. CogVideoX is a 3D Transformer-based video generation model. Think of it as a video version of Stable Diffusion: a diffusion model that operates over temporal sequences of frames rather than single images. The base model (CogVideoX-Fun-V1.5-5b-InP) is released by Alibaba PAI on Hugging Face, and it is the checkpoint engineers will need to download separately before running VOID.
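As a minimal sketch, fetching that base checkpoint might look like the following. Note that the repo ID below is an assumption (the checkpoint is published under Alibaba PAI's Hugging Face organization, but check the VOID repo for the exact path):

```python
# Download the CogVideoX-Fun-V1.5-5b-InP base checkpoint from Hugging Face.
# NOTE: the repo ID is an assumption; verify it against the VOID repo
# before running.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="alibaba-pai/CogVideoX-Fun-V1.5-5b-InP",  # assumed repo ID
)
print(f"Base model downloaded to: {local_dir}")
```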

The fine-tuned architecture specs: a CogVideoX 3D Transformer with 5B parameters, taking video, a quadmask, and a text prompt describing the scene after removal as input, operating at a default resolution of 384×672, processing a maximum of 197 frames, using the DDIM scheduler, and running in BF16 with FP8 quantization for memory efficiency.
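Collected in one place, those specs translate into an inference configuration roughly like this (an illustrative sketch; the field names are mine, only the values come from the published specs):

```python
# Illustrative inference config mirroring the published VOID specs.
# Field names are hypothetical, not the repo's actual config keys.
from dataclasses import dataclass

@dataclass
class VoidInferenceConfig:
    base_model: str = "CogVideoX-Fun-V1.5-5b-InP"  # 5B-param 3D Transformer
    height: int = 384        # default resolution: 384x672
    width: int = 672
    max_frames: int = 197    # maximum clip length in frames
    scheduler: str = "DDIM"  # denoising scheduler
    dtype: str = "bfloat16"  # BF16 compute ...
    quantization: str = "fp8"  # ... with FP8 weights for memory efficiency

cfg = VoidInferenceConfig()
```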

The quadmask is arguably the most interesting technical contribution here. Rather than a binary mask (remove this pixel / keep this pixel), the quadmask is a four-value mask that encodes the primary object to remove, overlap regions, affected regions (falling objects, displaced items), and background to keep.

In practice, every pixel in the mask gets one of four values: 0 (primary object being removed), 63 (overlap between primary and affected regions), 127 (interaction-affected region: things that will move or change as a result of the removal), and 255 (background, keep as-is). This gives the model a structured semantic map of what is happening in the scene, not just where the object is.
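Here is a minimal sketch of how such a mask could be assembled from per-region binary masks. The helper is illustrative (VOID's actual preprocessing may differ); only the pixel encoding comes from the article:

```python
import numpy as np

def build_quadmask(primary: np.ndarray, affected: np.ndarray) -> np.ndarray:
    """Combine two binary masks (H, W) into a single 4-value quadmask.

    Pixel encoding follows the values described above:
      0   -> primary object to remove
      63  -> overlap of primary and affected regions
      127 -> interaction-affected region (will move/change after removal)
      255 -> background, keep as-is
    """
    quadmask = np.full(primary.shape, 255, dtype=np.uint8)      # background
    quadmask[affected.astype(bool)] = 127                       # affected
    quadmask[primary.astype(bool)] = 0                          # primary
    quadmask[primary.astype(bool) & affected.astype(bool)] = 63  # overlap
    return quadmask

# Toy example: a person occupies the left half; a held guitar overlaps it.
person = np.zeros((4, 8), dtype=np.uint8); person[:, :4] = 1
guitar = np.zeros((4, 8), dtype=np.uint8); guitar[1:3, 3:6] = 1
print(build_quadmask(person, guitar))
```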

Two-Pass Inference Pipeline

VOID uses two transformer checkpoints, trained sequentially. You can run inference with Pass 1 alone or chain both passes for greater temporal consistency.

Pass 1 (void_pass1.safetensors) is the base inpainting model and is sufficient for most videos. Pass 2 serves a specific purpose: correcting a known failure mode. If the model exhibits object morphing, a known weakness of smaller video diffusion models, an optional second pass re-runs inference using flow-warped noise derived from the first pass, stabilizing object shape along the newly synthesized trajectories.

It's worth understanding the distinction: Pass 2 isn't just for longer clips; it is specifically a shape-stability fix. When the diffusion model produces objects that gradually warp or deform across frames (a well-documented artifact in video diffusion), Pass 2 uses optical flow to warp the latents from Pass 1 and feeds them as initialization into a second diffusion run, anchoring the shape of synthesized objects frame-to-frame.
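To make the mechanism concrete, here is a hedged sketch of the core warping step: resampling a latent frame along an optical-flow field with `grid_sample`. This is the standard flow-warping recipe, not VOID's exact code; details like noise handling and occlusion masking are omitted:

```python
import torch
import torch.nn.functional as F

def warp_with_flow(latent: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a latent frame (1, C, H, W) by a backward flow (1, 2, H, W).

    Builds a sampling grid by displacing pixel coordinates with the flow,
    then resamples the latent. VOID's actual implementation may differ.
    """
    _, _, h, w = latent.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=latent.dtype),
        torch.arange(w, dtype=latent.dtype),
        indexing="ij",
    )
    # Displaced coordinates in (x, y) order, shape (1, 2, H, W).
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow
    # Normalize to [-1, 1] as grid_sample expects.
    grid[:, 0] = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid[:, 1] = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(latent, grid.permute(0, 2, 3, 1),
                         align_corners=True, padding_mode="border")

# Chaining idea: warp each Pass-1 latent frame-to-frame along the flow,
# then use the result as initialization (instead of pure noise) for Pass 2.
```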

How the Training Data Was Generated

This is where things get genuinely interesting. Training a model to understand physical interactions requires paired videos: the same scene, with and without the object, where the physics plays out correctly in both. Real-world paired data at this scale doesn't exist. So the team built it synthetically.

Training used paired counterfactual videos generated from two sources: HUMOTO, human-object interactions rendered in Blender with physics simulation, and Kubric, object-only interactions using Google Scanned Objects.

HUMOTO uses motion-capture data of human-object interactions. The key mechanic is a Blender re-simulation: the scene is set up with a human and objects, rendered once with the human present, then the human is removed from the simulation and physics is re-run forward from that point. The result is a physically correct counterfactual: objects that were being held or supported now fall, exactly as they should. Kubric, developed by Google Research, applies the same idea to object-object collisions. Together, they produce a dataset of paired videos where the physics is provably correct, not approximated by a human annotator.
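The counterfactual-pairing idea is easy to see in miniature. The toy sketch below (my own illustration, not the paper's pipeline) simulates a held object twice, once with the support present and once with it removed, yielding a before/after trajectory pair:

```python
# Toy 1-D counterfactual physics pair: a held object vs. the same object
# after its support is removed. Illustrative only; the real pipeline uses
# Blender/Kubric rigid-body simulation, not this toy integrator.
G, DT, STEPS = -9.81, 1.0 / 24.0, 24  # gravity, timestep (24 fps), frames

def simulate(height: float, supported: bool) -> list[float]:
    h, v, traj = height, 0.0, []
    for _ in range(STEPS):
        if not supported:        # support removed: gravity takes over
            v += G * DT
            h = max(0.0, h + v * DT)
        traj.append(round(h, 3))
    return traj

with_person = simulate(1.2, supported=True)      # guitar held at 1.2 m
without_person = simulate(1.2, supported=False)  # counterfactual: it falls
print(with_person[:4], without_person[:4])
```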

Key Takeaways

  • VOID goes beyond pixel-filling. Unlike existing video inpainting tools that only correct visual artifacts like shadows and reflections, VOID understands physical causality: if you remove a person holding an object, the object falls naturally in the output video.
  • The quadmask is the core innovation. Instead of a simple binary remove/keep mask, VOID uses a four-value quadmask (values 0, 63, 127, 255) that encodes not just what to remove, but which surrounding regions of the scene will be physically affected, giving the diffusion model structured scene understanding to work with.
  • Two-pass inference solves a real failure mode. Pass 1 handles most videos; Pass 2 exists specifically to fix object morphing artifacts, a known weakness of video diffusion models, by using optical flow-warped latents from Pass 1 as initialization for a second diffusion run.
  • Synthetic paired data made training possible. Since real-world paired counterfactual video data doesn't exist at scale, the research team built it using Blender physics re-simulation (HUMOTO) and Google's Kubric framework, producing ground-truth before/after video pairs where the physics is provably correct.

Check out the Paper, Model Weights, and Repo.

