Detail of Publication
| Text Language | English |
|---|---|
| Authors | Yusuke Oguma and Koichi Kise |
| Title | Media-Independent Stamp-Based Document Annotation Using Document Image Retrieval |
| Journal | Proc. of the 1st International Workshop on Visual Recognition and Retrieval for Mixed and Augmented Reality |
| Number of Pages | 4 pages |
| Location | Fukuoka, Japan |
| Reviewed or not | Reviewed |
| Presentation type | Oral |
| Month & Year | October 2015 |
| Abstract | In recent years, electronic documents have become popular. One of the advantages of electronic documents is that they provide a method of putting and sharing annotations on documents. However, the market size of paper documents is still much larger than that of electronic documents, and we continue to use them. We consider that it is better to have a method of annotation applicable not only to electronic documents but also to paper documents. In this paper we propose a method of annotating both electronic and paper documents by capturing them as images. We use a smartphone as a device and make the method work in real time. As a way of annotation, we propose to use "stamps", which are pictorial icons representing opinions of readers (like, dislike, difficult, interesting, etc.). This helps readers to put annotations more easily as compared to text-based annotations. |
- Entry for BibTeX
```
@InProceedings{Kise2015,
  author    = {Yusuke Oguma and Koichi Kise},
  title     = {Media-Independent Stamp-Based Document Annotation Using Document Image Retrieval},
  booktitle = {Proc. of the 1st International Workshop on Visual Recognition and Retrieval for Mixed and Augmented Reality},
  year      = 2015,
  month     = oct,
  numpages  = {4},
  location  = {Fukuoka, Japan}
}
```