Publication Details
Text Language | English |
---|---|
Authors | Rina Buoy, Masakazu Iwamura, Sovila Srun, Koichi Kise |
Title | Explainable Connectionist-Temporal-Classification-Based Scene Text Recognition |
Journal | Journal of Imaging |
Vol. | 9 |
No. | 11 |
Number of Pages | 20 |
Publisher | MDPI |
Peer Reviewed | Yes |
Month & Year | November 2023 |
Abstract | Connectionist temporal classification (CTC) is a favored decoder in scene text recognition (STR) for its simplicity and efficiency. However, most CTC-based methods utilize one-dimensional (1D) vector sequences, usually derived from a recurrent neural network (RNN) encoder. This results in the absence of an explainable 2D spatial relationship between the predicted characters and the corresponding image regions, which is essential for model explainability. On the other hand, 2D attention-based methods enhance recognition accuracy and offer character location information via cross-attention mechanisms that link predictions to image regions. However, these methods are more computationally intensive than the 1D CTC-based methods. To achieve both low latency and model explainability via character localization with a 1D CTC decoder, we propose a marginalization-based method that processes 2D feature maps and predicts a sequence of 2D joint probability distributions over the height and class dimensions. Based on the proposed method, we introduce an association map that aids in character localization and the explanation of model predictions. This map parallels the role of a cross-attention map, as seen in computationally intensive attention-based architectures. With the proposed method, we consider a ViT-CTC STR architecture that uses a 1D CTC decoder and a pretrained vision Transformer (ViT) as a 2D feature extractor. Our ViT-CTC models were trained on synthetic data and fine-tuned on real labeled sets. These models outperform recent state-of-the-art (SOTA) CTC-based methods on benchmarks in terms of recognition accuracy. Compared with the baseline Transformer-decoder-based models, our ViT-CTC models offer a speed boost of up to 12 times regardless of the backbone, with a maximum 3.1% reduction in total word recognition accuracy. In addition, both qualitative and quantitative assessments of character locations estimated from the association map align closely with those from the cross-attention map and ground-truth character-level bounding boxes. |
DOI | 10.3390/jimaging9110248 |
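The marginalization idea described in the abstract above can be illustrated with a short sketch: for each horizontal position of the 2D feature map, a joint probability distribution over (height, class) is formed; marginalizing out the height dimension yields the per-timestep class distribution consumed by a standard 1D CTC decoder, while the height dimension of the same joint distribution yields an association map for character localization. This is not the authors' implementation; the function names, tensor shapes, and the way the association map is extracted and normalized below are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of marginalizing a 2D joint
# (height, class) distribution into 1D CTC inputs plus an association map.
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def marginalized_ctc_inputs(logits):
    """logits: (W, H, C) per-column scores from a 2D feature extractor (e.g. a ViT).

    Returns
    -------
    class_probs : (W, C) per-timestep class distribution for a 1D CTC decoder.
    assoc_map   : (W, H) height distribution of the most likely class at each
                  position, usable for character localization, analogous in role
                  to a cross-attention map.
    """
    W, H, C = logits.shape
    # Joint distribution over (height, class) at every horizontal position.
    joint = softmax(logits.reshape(W, H * C), axis=1).reshape(W, H, C)
    # Marginalize over height -> 1D sequence of class distributions for CTC.
    class_probs = joint.sum(axis=1)                      # (W, C)
    # Height profile of the top-scoring class at each position (illustrative choice).
    top = class_probs.argmax(axis=1)                     # (W,)
    assoc_map = joint[np.arange(W), :, top]              # (W, H), unnormalized
    assoc_map = assoc_map / assoc_map.sum(axis=1, keepdims=True)
    return class_probs, assoc_map

# Toy usage: 25 horizontal positions, feature height 8, 37 classes (blank + 36 chars).
rng = np.random.default_rng(0)
probs, assoc = marginalized_ctc_inputs(rng.normal(size=(25, 8, 37)))
print(probs.shape, assoc.shape)   # (25, 37) (25, 8)
```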
- BibTeX Entry
@Article{Buoy2023,
  author    = {Rina Buoy and Masakazu Iwamura and Sovila Srun and Koichi Kise},
  title     = {Explainable Connectionist-Temporal-Classification-Based Scene Text Recognition},
  journal   = {Journal of Imaging},
  year      = {2023},
  month     = nov,
  volume    = {9},
  number    = {11},
  numpages  = {20},
  doi       = {10.3390/jimaging9110248},
  publisher = {MDPI}
}