
Method for translation of characters in picture

A method for translating text in pictures, applied in the field of image processing, solves the problems that text in a picture is inconvenient to extract and that the picture's typesetting format is difficult to preserve, achieving improved translation accuracy while being easy to implement and easy to operate.

Active Publication Date: 2016-07-13
SHANDONG UNIV

Problems solved by technology

When the text in a picture needs to be translated, it is not easy to extract that text for translation, and it is difficult to retain the typesetting format of the original picture.



Examples


Embodiment 1

[0029] As shown in figure 1.

[0030] A method for translating text in a picture, comprising the following steps:

[0031] 1) Image preprocessing: denoise the picture, align the text content, and adjust the contrast. Pictures from scanners or cameras generally contain noise points, the text content may be skewed, and brightness and contrast vary widely between pictures. To improve the accuracy of subsequent text recognition, the picture is preprocessed: noise points are removed, the picture is deskewed so that its upper and lower edges and the text lines are horizontal, and the contrast is adjusted so that the text and the background can be clearly distinguished.
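The preprocessing step above can be sketched roughly as follows. This is a minimal illustration, not the patent's actual implementation: it assumes NumPy, a simple 3x3 median filter for noise-point removal, a linear contrast stretch, and a fixed global binarization threshold (deskewing is omitted).

```python
import numpy as np

def preprocess(img, threshold=128):
    """Denoise with a 3x3 median filter, stretch contrast to [0, 255],
    then binarize so text and background are clearly separated."""
    h, w = img.shape
    # 3x3 median filter: removes isolated noise points
    padded = np.pad(img, 1, mode="edge")
    denoised = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            denoised[y, x] = np.median(padded[y:y + 3, x:x + 3])
    # linear contrast stretch to the full [0, 255] range
    lo, hi = int(denoised.min()), int(denoised.max())
    if hi > lo:
        stretched = (denoised.astype(float) - lo) * 255.0 / (hi - lo)
    else:
        stretched = denoised.astype(float)
    # binarize: light pixels become background (255), dark pixels text (0)
    return np.where(stretched >= threshold, 255, 0).astype(np.uint8)
```

A real pipeline would use an adaptive threshold and a deskew estimate (e.g. from projection profiles), but the sketch shows the three operations the step names: denoise, normalize contrast, separate text from background.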

[0032] 2) Text area detection: in a picture, the position and size of the text areas are not fixed, so a detector generated by a machine learning method detects and marks the text areas and non-text...

Embodiment 2

[0042] According to the method for translating text in a picture described in embodiment 1, the difference is that the text-area detection method in step 2) is the Soft-Cascade algorithm based on AdaBoost. This algorithm combines several weak classifiers into a strong classifier, cascades the weak classifiers, and sets a detection threshold at each level so that negative samples can be detected and rejected quickly, speeding up detection. The AdaBoost algorithm trains different weak classifiers on the same training set and combines them according to certain rules to form a strong classifier. A weak classifier is one whose classification accuracy is only slightly above 50%, i.e. only slightly better than random guessing; the final strong classifier achieves a much higher accuracy, far better than any single weak...
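The AdaBoost-plus-soft-cascade idea above can be sketched in a toy form. This is an illustrative 1-D version with decision stumps as the weak classifiers, not the patent's detector: real text detectors run over image features, and the per-stage rejection thresholds would be calibrated on training data rather than supplied by hand.

```python
import math

def train_stump(X, y, w):
    """Find the 1-D threshold stump (threshold, polarity) with the
    lowest weighted error; this is the 'weak classifier'."""
    best = None
    for thr in sorted(set(X)):
        for pol in (1, -1):
            pred = [pol if x >= thr else -pol for x in X]
            err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(X, y, rounds=5):
    """Combine weak stumps into a strong classifier (AdaBoost):
    re-weight the training set after each round so misclassified
    samples get more attention."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, thr, pol = train_stump(X, y, w)
        err = max(err, 1e-10)                      # avoid log(inf)
        alpha = 0.5 * math.log((1 - err) / err)    # weak-classifier weight
        ensemble.append((alpha, thr, pol))
        w = [wi * math.exp(-alpha * yi * (pol if x >= thr else -pol))
             for wi, x, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]                   # renormalize weights
    return ensemble

def cascade_classify(x, ensemble, reject_thresholds):
    """Soft-cascade evaluation: accumulate weak-classifier scores and
    reject early as soon as the running sum falls below the stage
    threshold, so obvious negatives are discarded cheaply."""
    score = 0.0
    for (alpha, thr, pol), rt in zip(ensemble, reject_thresholds):
        score += alpha * (pol if x >= thr else -pol)
        if score < rt:
            return -1  # rejected early as a negative (non-text) sample
    return 1 if score >= 0 else -1
```

The early-exit in `cascade_classify` is what makes detection fast: most windows in a picture are non-text and are rejected after evaluating only the first few weak classifiers.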

Embodiment 3

[0044] According to the method for translating text in a picture described in embodiment 1, the difference is that the specific machine-translation method in step 4) is to call the Baidu Translate API to obtain a preliminary result, and then adjust the preliminary result by manual translation.
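The translate-then-post-edit workflow, combined with the confidence-based color marking described in the abstract, could be sketched as below. The real method calls the Baidu Translate API; here `machine_translate` is a hypothetical stand-in stub, and the confidence thresholds and colors are illustrative assumptions, not values from the patent.

```python
def machine_translate(text):
    """Stub for the Baidu Translate API call.
    Returns (translation, confidence); the real method would issue an
    HTTP request to the API and parse its response."""
    fake_results = {
        "你好": ("hello", 0.95),
        "图片中的文字": ("text in the picture", 0.60),
    }
    return fake_results.get(text, (text, 0.0))

def confidence_color(conf):
    """Map a translation-confidence score to a background color so a
    human post-editor can spot low-confidence spans at a glance."""
    if conf >= 0.9:
        return "white"   # high confidence: no highlight needed
    if conf >= 0.5:
        return "yellow"  # medium confidence: review recommended
    return "red"         # low confidence: manual translation needed

def translate_and_mark(blocks):
    """Machine-translate each recognized text block and attach its
    confidence and review color for the manual post-editing pass."""
    out = []
    for text in blocks:
        translation, conf = machine_translate(text)
        out.append({"source": text, "translation": translation,
                    "confidence": conf, "background": confidence_color(conf)})
    return out
```

Marking each block rather than the whole page lets the human translator skip the high-confidence spans and spend effort only where the machine translation is uncertain.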



Abstract

The present invention relates to a method for translating characters in a picture. The method detects and OCR-recognizes the characters in a document using a machine learning approach, performs machine translation of the characters, and assigns each translation a confidence score; translations at different confidence levels are marked with different background colors so that later manual translation can focus on the low-confidence passages, improving translation accuracy. By recognizing only the character regions and their content, the method preserves the format of the original scanned picture while supporting several translation modes, so translation accuracy is high and the method is easy to implement.

Description

technical field

[0001] The invention relates to a method for translating characters in pictures, and belongs to the technical field of image processing.

Background technique

[0002] In a modern society where internationalization is increasingly widespread and information exchange increasingly frequent, we often need to translate the text content of documents, scanned copies of documents, or pictures with specific text formats between languages. In the prior art there are relatively mature technologies and software for translating plain text; but for text in pictures, especially pictures with a specific format, it is usually still necessary to translate manually and re-save the file in its original format, which makes translating the text content of pictures cumbersome and inconvenient.

[0003] Scanned files are generally saved in image format, which contains specific text and specific types...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/00; G06K9/32; G06K9/34; G06K9/62; G06F17/28
CPC: G06F40/58; G06V10/243; G06V30/153; G06V30/10; G06F18/214; G06T3/04
Inventors: 王洪君, 孙健琳, 于光玉, 刘珂, 王小飞
Owner SHANDONG UNIV