IIIT Hyderabad Publications
Laban based Semantic Annotation and a Digital Archive Platform for Dance Videos

Author: Swati Dewan
Date: 2021-12-24
Report no: IIIT/TH/2021/129
Advisor: Anoop M Namboodiri

Abstract

In this work, we propose Spatio-Temporal Laban Features, a novel motion descriptor based on Laban Theory, and provide a platform to document, analyse and annotate performed dance. Dance is an indispensable part of our lives; it has accompanied social and religious events, ceremonies, rituals and celebrations throughout history. However, dance evolves: it is highly susceptible to creative changes by practitioners. As part of our cultural heritage, classical and folk dance in particular needs to be preserved so that its evolution can be tracked through time. With the boom in popularity of media platforms, multimedia content has permeated virtually every aspect of our lives. The volume of user-generated content on the web has skyrocketed, and unlike before, dance is leaving behind a larger trail than ever. However, the majority of such content remains undocumented, primarily because of the time and resources required to annotate it manually or through sensors.

We leverage advances in Machine Learning to reduce these costs and automate motion indexing. We create a semantically searchable dance database with support for Labanotation generation, automatic annotation, and retrieval. Our approach is built on a thorough review of Laban Theory, a widely used annotation system based on movement analysis. We apply a pose estimation module to retrieve pose and generate Labanotation over recorded videos. Labanotation provides a comprehensive annotation system that we use to generate our novel motion descriptor, and it can further be exploited to build an ontology. It is also highly relevant for the preservation and digitization of online resources, especially for dance cultural heritage, which is the focus of this work.

We build a semi-automatic annotation model that generates semantic annotations over a large video database. We combine two publicly available ballet datasets and test our model on the combined set. High-level concepts such as ballet poses and steps are used to build the semantic library. These also act as descriptive meta-tags for any ballet video, making the videos retrievable using a semantic text query or even a video query. The final platform thus allows a user to generate and study the form of any dance sequence using Labanotation, as well as to search for particular steps to learn from a video database.
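As an illustrative sketch only (not the thesis implementation), the snippet below shows how a spatio-temporal, Laban-inspired descriptor might be computed from per-frame 2D keypoints produced by any off-the-shelf pose estimator. The joint layout, window length, and the velocity/extent proxies chosen for the Laban Effort, Shape and Space categories are assumptions, and the function name laban_inspired_descriptor is hypothetical.

import numpy as np

def laban_inspired_descriptor(keypoints, fps=25.0, window=16):
    """keypoints: array of shape (T, J, 2) -- T frames, J joints, (x, y) image coordinates."""
    T = keypoints.shape[0]
    vel = np.diff(keypoints, axis=0) * fps                         # per-joint velocity between frames
    acc = np.diff(vel, axis=0) * fps                               # per-joint acceleration
    descriptors = []
    for start in range(0, T - window + 1, window):
        win = keypoints[start:start + window]
        v = vel[start:start + window - 1]
        a = acc[start:start + window - 2]
        centre = win.mean(axis=1, keepdims=True)                   # body centre per frame
        feats = np.array([
            np.linalg.norm(v, axis=-1).mean(),                     # Effort-Time proxy: mean joint speed
            np.linalg.norm(a, axis=-1).mean(),                     # Effort-Weight proxy: mean joint acceleration
            np.prod(win.max(axis=(0, 1)) - win.min(axis=(0, 1))),  # Shape proxy: extent of the kinesphere
            np.linalg.norm(win - centre, axis=-1).mean(),          # Space proxy: joint spread around the centre
        ], dtype=np.float32)
        descriptors.append(feats)
    return np.stack(descriptors) if descriptors else np.empty((0, 4), np.float32)

# Usage with dummy data standing in for pose-estimator output (COCO-style 17 joints, an assumption):
# pose = np.random.rand(64, 17, 2)
# desc = laban_inspired_descriptor(pose)   # -> shape (4, 4): one descriptor per 16-frame window

In the thesis, such per-window descriptors are the kind of representation that can be mapped to Labanotation symbols and to the semantic meta-tags used for annotation and retrieval; the specific feature set above is a simplification for illustration.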
Full thesis: pdf

Centre for Exact Humanities