MOVING platform: VideoLectures.NET Chapters
VideoLectures.NET is part of the H2020 project MOVING, which has been developing new and more effective methods for lecture video fragmentation and fragment-level annotation, allowing fine-grained access to lecture video collections. In the latest MOVING method, developed by CERTH (a member and the coordinator of the MOVING consortium), automatically generated speech transcripts of the lecture video are analysed with the help of word embeddings produced by pre-trained state-of-the-art neural networks. This lecture video fragmentation method is part of the MOVING platform, and its results are also ingested into the VideoLectures.NET platform, so that users of both platforms can access and view the specific fragments of lecture videos that match their information needs.
The fragments are accessible for 8896 video lectures in VideoLectures.NET; see for instance the lecture on deep learning or the lecture about fighting the tuberculosis pandemic using ML. The fragments are presented as "chapters" to the right of the video player window and help users find particular parts of a video more easily and quickly. Lectures at VideoLectures.NET mostly last between half an hour and an hour, so when a user is interested in only a specific part of a lecture, the fragments (chapters) make it easier to find and consume the desired parts of the learning materials. In the future, fragments will be added to all lectures. To see how the fragment-level information can be accessed and used, please watch the demo.
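To give an intuition for transcript-based fragmentation, here is a minimal, hypothetical sketch (not the actual CERTH/MOVING method, whose details are in the publication below): it assumes each transcript sentence has already been mapped to a word-embedding vector, and places a fragment boundary wherever the average embedding of the preceding sentences diverges sharply from that of the following ones, in the spirit of classic TextTiling-style segmentation. The function names, window size, and threshold are illustrative assumptions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def mean_vec(vecs):
    # Component-wise mean of a list of equal-length vectors.
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def fragment(sentence_embeddings, window=2, threshold=0.5):
    """Return sentence indices at which a new fragment starts.

    A boundary is placed before sentence i when the mean embedding of the
    `window` sentences before i is dissimilar (cosine < threshold) to the
    mean embedding of the `window` sentences from i onward.
    """
    boundaries = []
    for i in range(window, len(sentence_embeddings) - window + 1):
        left = mean_vec(sentence_embeddings[i - window:i])
        right = mean_vec(sentence_embeddings[i:i + window])
        if cosine(left, right) < threshold:
            boundaries.append(i)
    return boundaries

# Toy example: two topics, embeddings near [1, 0] then near [0, 1].
emb = [[1, 0], [0.9, 0.1], [0.95, 0.05], [0.1, 0.9], [0, 1], [0.05, 0.95]]
print(fragment(emb))  # -> [3]: the topic shifts at sentence index 3
```

In a real system the sentence vectors would come from pre-trained embeddings and the boundary rule would be learned rather than a fixed threshold; this sketch only illustrates the underlying idea of cutting where transcript semantics change.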
For technical details on the lecture fragmentation method, and for access to a new artificially generated dataset of synthetic video lecture transcripts that MOVING has created and released based on real VideoLectures.NET data, see the following publication:
D. Galanopoulos, V. Mezaris, "Temporal Lecture Video Fragmentation using Word Embeddings", Proc. 25th Int. Conf. on Multimedia Modeling (MMM2019), Thessaloniki, Greece, Springer LNCS vol. 11296, pp. 254-265, Jan. 2019.
Dataset available at https://github.com/bmezaris/lecture_video_fragmentation