IIIT Hyderabad Publications
Title: Collaborative Contributions for Better Annotations
Authors: Priyam Bakliwal, Guru Prasad Hegde, C V Jawahar
Conference: 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP, VISIGRAPP 2017)
Location: Porto, Portugal
Date: 2017-02-27
Report no: IIIT/TR/2017/77

Abstract: We propose an active learning based solution for efficient, scalable and accurate annotation of objects in video sequences. Recent computer vision solutions rely on machine learning, and their effectiveness depends on the availability of large amounts of accurately annotated data. In this paper, we focus on reducing the human annotation effort while simultaneously increasing tracking accuracy, so as to obtain precise, tight bounding boxes around the object of interest. We use a novel combination of two different tracking algorithms to track the object through the whole video sequence. We propose a sampling strategy that selects the most informative frame, which is then given to a human for annotation. The newly annotated frame is used to update the previous annotations. Thus, through the collaborative efforts of the human and the system, we obtain accurate annotations with minimal effort. Using the proposed method, user effort can be reduced by half without compromising annotation accuracy. We have quantitatively and qualitatively validated the results on eight different datasets.

Full paper: pdf
Centre for Visual Information Technology
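The abstract outlines an active-learning loop: two trackers propose boxes for every frame, the most informative frame is sampled for human annotation, and the correction is propagated back. A minimal sketch of that sampling step is below; the paper's actual trackers and informativeness criterion are not specified here, so the IoU-based disagreement score and the function names are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's method): score each frame by how
# much two independent trackers disagree, and pick the frame with the
# highest disagreement as the "most informative" one to hand to a human.

def disagreement(box_a, box_b):
    """1 - IoU of two (x, y, w, h) boxes: high when the trackers disagree."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return 1.0 - inter / union if union else 1.0

def most_informative_frame(boxes_a, boxes_b):
    """Index of the frame where the two trackers' boxes disagree most."""
    return max(range(len(boxes_a)),
               key=lambda f: disagreement(boxes_a[f], boxes_b[f]))
```

In each round, the selected frame would be shown to the annotator, and the corrected box would seed a fresh tracking pass to update the annotations on the surrounding frames, as the abstract describes.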