


TEXT EXTRACTION FROM IMAGES AND VIDEOS
Marios Anthimopoulos (ΜΑΡΙΟΣ ΑΝΘΙΜΟΠΟΥΛΟΣ)
Computational Intelligence Laboratory
Institute of Informatics and Telecommunications – NCSR “Demokritos”

Outline
§ VOCR overview
o Text Detection
o Text Tracking
o Text Segmentation
§ Proposed methodology – Overview
o Text areas detection
o Text lines detection
o Multiresolution analysis
o Text Segmentation
o Evaluation strategy
o Experimental results
o Publications
o Future work

VOCR overview
§ VOCR: a research area that attempts to develop a computer system with the ability to automatically read, from videos, the textual content visually embedded in complex backgrounds.
§ Artificial (superimposed, graphic, caption, overlay) text: artificially superimposed on images or video frames at the time of editing. Artificial text usually underscores or summarizes the video’s content, which makes it particularly useful for building keyword indexes.

VOCR overview
§ Scene text: occurs naturally in the field of view of the camera during video capture. Scene text on signs, banners, etc. may also provide keywords that describe the content of a video sequence.

VOCR overview
§ Challenges for VOCR:
– Lower resolution: video frames are typically captured at 320 × 240 or 640 × 480 pixels, while document images are typically digitized at 300 dpi or greater.
– Unknown text color: text can have arbitrary and non-uniform color.
– Unknown text size, position, orientation, layout: captions lack the structure usually associated with documents.
– Unconstrained background: the background can have colors similar to the text color and may include streaks that appear very similar to character strokes.
– Color bleeding: lossy video compression may cause colors to run together.
– Low contrast: low-bit-rate video compression causes loss of contrast between character strokes and the background.

VOCR overview
§ Basic steps of a VOCR system (Video → … → Text):
• Text detection: spatial text detection in every frame → a box for every text line.
• Text tracking + enhancement: temporal text detection from frame to frame, multi-frame integration for image enhancement → an enhanced image for every text line.
• Text segmentation: binarization and resolution enhancement → a b/w image for every text line.
• Text recognition → ASCII characters for every text line.

Text Detection
Text detection methods can generally be classified into two categories:
Bottom-up methods: segment images into regions and group “character” regions into words. To some degree, these methods can avoid performing explicit text detection. Due to the difficulty of developing efficient segmentation algorithms for text in complex backgrounds, they are not robust for detecting text in many camera-based images and videos.
Top-down methods: first detect text regions in images using filters and then perform bottom-up techniques inside the text regions. These methods are able to process more complex images than bottom-up approaches. Top-down methods are further divided into two categories:
- Heuristic methods: use heuristic filters
- Machine learning methods: use trained filters

Text Detection
Bottom-up method: Lienhart et al. [1] regard text regions as connected components (CCs) with the same or similar color and size, and apply motion analysis to enhance the text extraction results for a video sequence. The input image is segmented based on the monochromatic nature of the text components using a split-and-merge algorithm. Segments that are too small or too large are filtered out. After dilation, motion information and contrast analysis are used to enhance the extracted results.
[1] Rainer Lienhart and Frank Stuber, “Automatic text recognition in digital videos”, Technical Report, Department for Mathematics and Computer Science, University of Mannheim, TR-1995-036

Text Detection
Top-down methods: Du et al. [2] use multistage pulse code modulation (MPCM) to locate potential text regions in colour video images. A sequence of spatial filters is applied to remove noisy regions.
[2] Yingzi Du, Chein-I Chang, Paul D. Thouin, “Automated system for text detection in individual video images”, Journal of Electronic Imaging, 12(3), 410-422, 2003.

Text Detection
Top-down methods: Zhong et al. [3] use the DCT coefficients of compressed JPEG or MPEG files in order to distinguish the texture of textual regions from non-textual regions.
[3] Yu Zhong, HongJiang Zhang, Anil K. Jain, “Automatic Caption Localization in Compressed Video”, IEEE Trans. Pattern Analysis and Machine Intelligence, 22(4): 385-392 (2000)

Text Detection
Top-down methods: Lienhart et al. [4] use gradient features fed to a neural network. For each 20 × 10 pixel window at each scale, its confidence value for text is added to the saliency map S, which is finally binarized.
[4] Rainer Lienhart and Axel Wernicke, “Localizing and Segmenting Text in Images and Videos”, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 4 (2002)

Text Tracking
Text tracking: temporal detection of text in video sequences.
Every-frame detection: all pairs of bounding boxes with non-zero overlap must have:
- the difference in size below a certain threshold,
- the difference in position below a certain threshold,
- the size of the overlap area above a certain threshold.
Periodical detection: a box in frame t is recognized in frame t+k, moved by a vector (Δx, Δy), if the correlation measure Cor is lower than a threshold.
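The every-frame matching conditions above can be sketched as a predicate over pairs of bounding boxes; the function name and the default tolerances are illustrative, not taken from the slides:

```python
def boxes_match(b1, b2, size_tol=4, pos_tol=4, min_overlap=1):
    """Every-frame tracking test: two (x0, y0, x1, y1) boxes in consecutive
    frames are treated as the same text line if their sizes and positions
    differ by less than the given tolerances and their overlap area is at
    least min_overlap. Tolerance values here are placeholders."""
    x0a, y0a, x1a, y1a = b1
    x0b, y0b, x1b, y1b = b2
    # difference in size below a threshold
    if abs((x1a - x0a) - (x1b - x0b)) > size_tol or \
       abs((y1a - y0a) - (y1b - y0b)) > size_tol:
        return False
    # difference in position below a threshold
    if abs(x0a - x0b) > pos_tol or abs(y0a - y0b) > pos_tol:
        return False
    # overlap area above a threshold
    ox = max(0, min(x1a, x1b) - max(x0a, x0b))
    oy = max(0, min(y1a, y1b) - max(y0a, y0b))
    return ox * oy >= min_overlap
```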

Text Tracking
Image enhancement – multi-frame integration: if Fi, i = 1, …, T, are the tracked boxes of T different frames, the final image is computed by integrating them:

Text Segmentation
Global binarization:
- Otsu method: the most effective global thresholding.
Local binarization:
- Niblack: very fast, proved effective for VOCR. Uses a shifting window covering at least 1-2 characters and applies a threshold T = m + k·s, where k = -0.2, m = mean, s = standard deviation. Problems in areas with no text.
- Sauvola: solves that problem; assumes that text is dark on a bright background. T = m·(1 - k·(1 - s/R)), where R = 128, k = 0.5. Problems when the hypothesis is not true (even after reversing).
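The two local thresholds can be written down directly; a minimal sketch over a flat list of window pixel values (the sliding-window machinery over the image is omitted):

```python
import statistics

def niblack_threshold(window, k=-0.2):
    """Niblack local threshold: T = m + k*s, with k = -0.2 as in the slides."""
    m = statistics.fmean(window)
    s = statistics.pstdev(window)
    return m + k * s

def sauvola_threshold(window, k=0.5, R=128):
    """Sauvola local threshold: T = m * (1 - k*(1 - s/R)), R = 128, k = 0.5."""
    m = statistics.fmean(window)
    s = statistics.pstdev(window)
    return m * (1.0 - k * (1.0 - s / R))
```

Note how the Sauvola threshold drops toward m/2 in flat (no-text) windows where s ≈ 0, which is exactly how it avoids Niblack's problem in areas with no text.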

Proposed methodology - Overview
§ The proposed methodology exploits the fact that text lines produce strong, horizontally aligned vertical edges and follow specific shape restrictions. Using edges as the prominent feature of our system gives us the opportunity to detect characters with different fonts and colors, since every character presents strong edges, whatever its font or color, in order to be readable.
§ The whole algorithm is applied in a multiresolution fashion to ensure text detection across a range of sizes.

Proposed methodology - Overview
§ Flowchart of the text detection methodology:
• Map generation: the edge map is generated using the Canny edge detector.
• Dilation: a dilation by a cross-shaped 5 × 21 element is performed to connect the character contours of every text line.
• Opening: a morphological opening is used to remove noise and smooth the shape of the candidate text areas. The element used here is also cross-shaped, with size 11 × 45.
• Projections analysis: edge projections are computed, and rows or columns with values under a threshold are discarded. Boxes with more than one text line are divided, and some noisy areas are eliminated.
• Scale integration: the methodology described above is applied to the image at different scales, and finally the results are fused to the initial scale.


Proposed methodology - Text areas detection
§ We use the Canny edge detector applied to greyscale images. Canny uses Sobel masks to find the edge magnitude of the image and then applies non-maxima suppression and hysteresis thresholding. With these two post-processing operations the Canny edge detector manages to remove non-maxima pixels while preserving the connectivity of the contours.

Proposed methodology - Text areas detection
§ After computing the Canny edge map, a dilation by a 5 × 21 element is performed to connect the character contours of every text line. Experiments showed that a cross-shaped element gives better results.

Proposed methodology - Text areas detection
§ Then a morphological opening is used, removing noise and smoothing the shape of the candidate text areas. The element used here is also cross-shaped, with size 11 × 45.
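The dilation and opening steps can be sketched with naive binary morphology; the helper names are ours, and a real system would use an optimized library (e.g. OpenCV or scipy.ndimage) rather than these loops:

```python
import numpy as np

def cross_element(h, w):
    """Cross-shaped structuring element of size h x w (odd sizes expected)."""
    el = np.zeros((h, w), dtype=bool)
    el[h // 2, :] = True
    el[:, w // 2] = True
    return el

def binary_dilate(img, el):
    """Naive binary dilation: OR of the padded image over the element's
    offsets (valid for symmetric elements like the cross)."""
    H, W = img.shape
    ch, cw = el.shape[0] // 2, el.shape[1] // 2
    padded = np.pad(img.astype(bool), ((ch, ch), (cw, cw)))
    out = np.zeros((H, W), dtype=bool)
    for dy, dx in zip(*np.nonzero(el)):
        out |= padded[dy:dy + H, dx:dx + W]
    return out

def binary_erode(img, el):
    """Naive binary erosion: AND over the same offsets."""
    H, W = img.shape
    ch, cw = el.shape[0] // 2, el.shape[1] // 2
    padded = np.pad(img.astype(bool), ((ch, ch), (cw, cw)))
    out = np.ones((H, W), dtype=bool)
    for dy, dx in zip(*np.nonzero(el)):
        out &= padded[dy:dy + H, dx:dx + W]
    return out

def binary_open(img, el):
    """Opening = erosion then dilation: removes blobs smaller than el."""
    return binary_dilate(binary_erode(img, el), el)
```

On the Canny edge map, the slides use cross_element(5, 21) for the dilation and cross_element(11, 45) for the opening.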

Proposed methodology - Text areas detection
§ Every component with height less than 11 or width less than 45 is suppressed.
§ A connected component analysis helps us compute the initial bounding boxes of the candidate text areas.
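The connected component analysis with the size filter above can be sketched as a breadth-first labelling; `bounding_boxes` is an illustrative name, and boxes are returned as (x0, y0, x1, y1):

```python
import numpy as np
from collections import deque

def bounding_boxes(mask, min_h=11, min_w=45):
    """4-connected component analysis on a binary mask. Components with
    height < 11 or width < 45 are suppressed, as in the slides."""
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    H, W = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        q = deque([(sy, sx)])
        seen[sy, sx] = True
        y0, y1, x0, x1 = sy, sy, sx, sx
        while q:                      # BFS over one component
            y, x = q.popleft()
            y0, y1 = min(y0, y), max(y1, y)
            x0, x1 = min(x0, x), max(x1, x)
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if y1 - y0 + 1 >= min_h and x1 - x0 + 1 >= min_w:
            boxes.append((x0, y0, x1, y1))
    return boxes
```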

Proposed methodology - Text lines detection
§ To increase the precision and reject false alarms we use a method based on horizontal and vertical projections.
§ The horizontal edge projection of every box is computed, and lines with projection values below a threshold are discarded.
§ Boxes with more than one text line are divided, and some lines with noise are also discarded.

Proposed methodology - Text lines detection
§ Boxes which do not contain text are usually split into a number of boxes with very small height and discarded at a next stage due to geometrical constraints. A box is discarded if:
Ø its height is lower than a threshold (set to 12),
Ø its height is greater than a threshold (set to 48),
Ø its width/height ratio is lower than a threshold (set to 1.5).
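The horizontal-projection pruning and the geometric constraints can be sketched as follows; the threshold values come from the slides, while the function names are illustrative:

```python
import numpy as np

def prune_rows(edge_box, thresh):
    """Horizontal edge projection of a candidate box: rows whose projection
    falls below thresh are discarded, splitting the box into runs of kept
    rows (one run per candidate text line)."""
    keep = edge_box.sum(axis=1) >= thresh
    lines, start = [], None
    for i, k in enumerate(keep):
        if k and start is None:
            start = i
        elif not k and start is not None:
            lines.append((start, i - 1))
            start = None
    if start is not None:
        lines.append((start, len(keep) - 1))
    return lines

def keep_box(w, h, min_h=12, max_h=48, min_ratio=1.5):
    """Geometric constraints from the slides: height in [12, 48] pixels
    and width/height ratio at least 1.5."""
    return min_h <= h <= max_h and w / h >= min_ratio
```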

Proposed methodology - Text lines detection
§ Then a similar procedure with vertical projections follows.
§ The vertically divided parts remain connected if the distance between them is less than a threshold which depends on the height of the candidate text line (set to 1.5 × height).
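The vertical-merging rule can be sketched as follows, assuming the vertically divided parts are given as sorted (x_start, x_end) spans:

```python
def merge_words(spans, line_height, gap_factor=1.5):
    """Keep vertically divided parts connected when the horizontal gap
    between them is at most gap_factor * line_height (1.5 on the slides).
    spans: sorted list of (x_start, x_end) pairs for one text line."""
    merged = [list(spans[0])]
    for x0, x1 in spans[1:]:
        if x0 - merged[-1][1] <= gap_factor * line_height:
            merged[-1][1] = x1          # small gap: extend the current span
        else:
            merged.append([x0, x1])     # large gap: start a new box
    return [tuple(s) for s in merged]
```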

Proposed methodology - Text lines detection
§ The whole procedure with horizontal and vertical projections is repeated three times in order to segment even the most complicated text areas, and results in the final bounding boxes.

Proposed methodology - Multiresolution analysis
§ The size of the elements for the morphological operations and the geometrical constraints restrict the algorithm to detecting text in a specific range of character sizes (12-48 pixels).
§ To overcome this limitation we adopt a multiresolution approach: the algorithm described above is applied to the image at different resolutions, and finally the results are fused to the initial resolution.

Proposed methodology - Multiresolution analysis
Fine resolution / coarse resolution:
Ø We chose to use two resolutions for this approach: the initial one, and one with a scale factor of 0.6. In this way the system can detect characters with height up to 80 pixels, which was considered satisfactory.
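Mapping coarse-scale detections back to the initial resolution is a simple rescaling; the sketch below (function names are ours) also shows why a 0.6 scale factor extends the 12-48 pixel range up to 80 pixels:

```python
def to_original_scale(boxes, scale):
    """Map (x0, y0, x1, y1) boxes detected on a downscaled frame back to
    the initial resolution by dividing every coordinate by the scale."""
    return [tuple(int(round(c / scale)) for c in box) for box in boxes]

def detectable_height_range(scale, lo=12, hi=48):
    """The single-scale detector covers character heights of 12-48 pixels;
    running it again at the given scale covers lo/scale .. hi/scale pixels
    of the original frame (20-80 pixels for scale 0.6)."""
    return lo / scale, hi / scale
```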

Proposed methodology - Text Segmentation
Text segmentation: produce a b/w image for every detected text block.
We calculate the mean intensity value inside and outside the yellow box and compare the two values to decide between normal and inverse text; inverse text is inverted before binarization.
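A sketch of the polarity decision; the slides do not state the comparison direction, so calling the text "normal" when the box interior is brighter than the surrounding frame is our assumption:

```python
import numpy as np

def text_polarity(gray, box):
    """Compare the mean intensity inside the detected (x0, y0, x1, y1) box
    with the mean of the pixels outside it. ASSUMPTION: a box interior
    brighter than its surroundings is taken as 'normal' text; otherwise
    the box is inverted before binarization."""
    x0, y0, x1, y1 = box
    inside = gray[y0:y1 + 1, x0:x1 + 1]
    mask = np.ones(gray.shape, dtype=bool)
    mask[y0:y1 + 1, x0:x1 + 1] = False
    return "normal" if inside.mean() > gray[mask].mean() else "inverse"
```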

Proposed methodology - Text Segmentation
We based text segmentation on the adaptive binarization technique of [5]:
[5] B. Gatos, I. Pratikakis and S. J. Perantonis, “Adaptive Degraded Document Image Binarization”, Pattern Recognition, Vol. 39, pp. 317-327, 2006.

Proposed methodology - Text Segmentation
Processing stages: original image → gray-scale image → first draft binarization → background surface → final binary image.

Proposed methodology - Evaluation strategy
– A text line must influence the final evaluation measure in proportion to the number of characters it contains, not the number of its pixels.
– The number of characters in a box cannot be determined by the algorithm, but it can be approximated by the width/height ratio of the bounding box.
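The width/height approximation can be sketched in one line; the function name and the minimum of one character are our choices:

```python
def approx_char_count(box_w, box_h):
    """Approximate the number of characters in a text line by the bounding
    box aspect ratio, so a line's weight in the evaluation measure is
    proportional to its character count rather than its pixel count."""
    return max(1, round(box_w / box_h))
```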

Proposed methodology - Evaluation strategy
§ Recognition:
– Edit distance is used to compare the detected and the correct text: we calculate the minimum number of character insertions, deletions and replacements needed to correct the resulting text.
– We normalize the edit distance to the range 0-100.
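A sketch of the recognition measure: standard Levenshtein distance normalized to 0-100. Dividing by the longer of the two strings is our assumption, since the slides only state the 0-100 range:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of character insertions,
    deletions and replacements turning a into b (single-row DP)."""
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(d[j] + 1,                         # deletion
                                   d[j - 1] + 1,                     # insertion
                                   prev + (a[i - 1] != b[j - 1]))    # replacement
    return d[n]

def normalized_edit_distance(detected, correct):
    """Scale to 0-100 (0 = identical). ASSUMPTION: normalize by the
    length of the longer string."""
    if not detected and not correct:
        return 0.0
    return 100.0 * edit_distance(detected, correct) / max(len(detected), len(correct))
```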

Proposed methodology - Experimental results
§ Corpus: 3 sets of video frames (720 × 480) have been used, captured from TRECVID 2005 and 2006 (http://www-nlpir.nist.gov/projects/trecvid/)
§ Set 1 contains text in many different sizes as well as some scene text
§ Set 2 contains images with very large fonts and also some scene text
§ Set 3 contains artificial text with small fonts

Proposed methodology - Experimental results

Proposed methodology - Publications
§ M. Anthimopoulos, B. Gatos, I. Pratikakis, “Multiresolution text detection in video frames”, Second International Conference on Computer Vision Theory and Applications (VISAPP), Barcelona, Spain, March 8-11, 2007.
§ M. Anthimopoulos, B. Gatos, I. Pratikakis, S. J. Perantonis, “Detecting text in video frames”, Fourth IASTED International Conference on Signal Processing, Pattern Recognition, and Applications (SPPRA), Innsbruck, Austria, February 2007.
§ M. Anthimopoulos, B. Gatos, I. Pratikakis, “Text Detection in video frames”, accepted for publication in the proc. of the 11th PanHellenic Conference on Informatics (PCI 2007), Patras, May 2007.

Proposed methodology - Future work
§ Future work:
Ø We plan to exploit the color homogeneity of text
Ø Temporal text detection from frame to frame
Ø Multi-frame integration for image enhancement