For the paper, please see here (https://arxiv.org/abs/2010.14168).
For detailed information about the MAVC100 dataset and its download, please see here.
Below are the detection results of the rule-embedded audio-visual VAD network proposed in the paper.
The text at the top left of the video shows the anchor's activity at the current moment: when the anchor speaks, it shows speech; when the anchor sings, it shows singing; when the anchor does nothing and there is only background sound, it shows silence; otherwise, it shows others.
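This overlay rule amounts to a simple priority check. The following minimal Python sketch mirrors the description above; the boolean inputs are hypothetical flags for illustration, not code from the paper:

```python
def overlay_label(anchor_speaks: bool, anchor_sings: bool,
                  background_sound: bool) -> str:
    """Choose the top-left overlay label following the rule above.

    The three flags are hypothetical per-frame states, named only
    to mirror the prose description.
    """
    if anchor_speaks:
        return "speech"
    if anchor_sings:
        return "singing"
    if background_sound:
        # Anchor is inactive but background sound is present.
        return "silence"
    return "others"
```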
(To protect privacy, we masked the anchor's face in the video clips and distorted his or her voice.)
The left part is the audio branch (red text), which learns high-level acoustic features of the target events at the audio level; the right part is the image branch (blue text), which judges whether the anchor is vocalizing from visual information. The bottom part is the audio-visual branch (purple italics), which fuses the bi-modal representations to determine the probabilities of the target events of this paper.
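As a rough illustration of this three-branch layout, here is a minimal PyTorch-style sketch. The feature dimensions, layer sizes, and concatenation-based fusion are assumptions chosen for readability, not the exact design from the paper:

```python
import torch
import torch.nn as nn

class AudioVisualVAD(nn.Module):
    """Sketch of a two-branch network with audio-visual fusion.

    All dimensions and the concat-based fusion are illustrative
    assumptions, not the configuration from the paper.
    """

    def __init__(self, n_audio_feats=64, n_visual_feats=512):
        super().__init__()
        # Audio branch: high-level acoustic features of the target events.
        self.audio_branch = nn.Sequential(
            nn.Linear(n_audio_feats, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.audio_head = nn.Linear(128, 4)  # Singing / Speech / Others / Silence
        # Image branch: is the anchor vocalizing?
        self.image_branch = nn.Sequential(
            nn.Linear(n_visual_feats, 128), nn.ReLU(),
        )
        self.image_head = nn.Linear(128, 2)  # vocalizing / non-vocalizing
        # Audio-visual branch: fuse the bi-modal representations.
        self.av_head = nn.Sequential(
            nn.Linear(128 + 128, 64), nn.ReLU(),
            nn.Linear(64, 3),  # target Singing / Speech / Others
        )

    def forward(self, audio, image):
        a = self.audio_branch(audio)
        v = self.image_branch(image)
        audio_logits = self.audio_head(a)
        image_logits = self.image_head(v)
        av_logits = self.av_head(torch.cat([a, v], dim=-1))
        return audio_logits, image_logits, av_logits

# Example forward pass with random features.
model = AudioVisualVAD()
audio_logits, image_logits, av_logits = model(torch.randn(1, 64),
                                              torch.randn(1, 512))
```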
In subgraph (a), the red, blue, gray, and green lines denote the probabilities of Singing, Speech, Others, and Silence in the audio, respectively.
In subgraph (b), the gray and black lines denote the probabilities of vocalizing and non-vocalizing, respectively.
In subgraph (c), the red, blue, and gray lines denote the probabilities of the target events Singing, Speech, and Others, respectively; the remaining probability mass corresponds to Silence.
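Since subgraph (c) plots only the three target events, the Silence probability can be recovered as the residual mass. A small sketch with hypothetical per-frame values:

```python
import numpy as np

# Hypothetical per-frame probabilities from the audio-visual branch.
p_singing = np.array([0.70, 0.10, 0.05])
p_speech  = np.array([0.20, 0.80, 0.05])
p_others  = np.array([0.05, 0.05, 0.10])

# Silence is whatever probability mass the three target events leave over.
p_silence = 1.0 - (p_singing + p_speech + p_others)
print(p_silence)  # [0.05 0.05 0.8 ]
```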