
Translation method based on multi-modal machine translation model

A multimodal machine translation method in the technical field of machine translation. It addresses problems such as excessive parameter counts, unsuitability for complex multimodal translation tasks, and the inability to capture dynamically generated features, achieving the effect of improved translation performance.

Active Publication Date: 2020-11-20
XIAMEN UNIV

AI Technical Summary

Problems solved by technology

[0002] In the related art, existing multimodal machine translation methods typically treat the features of the image to be translated as global information and use an attention mechanism to dynamically extract image context features in order to learn a joint multimodal representation. However, using image features as global information and learning a joint multimodal representation in this way cannot capture features that are generated dynamically during translation; a single attention mechanism cannot handle complex multimodal translation tasks, while multi-attention mechanisms introduce too many parameters, leading to overfitting, which in turn greatly reduces the translation performance of multimodal machine translation.
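The single-attention scheme that this paragraph criticizes can be sketched as follows: the decoder's hidden state attends over local image region features to produce an image context vector. This is a minimal illustrative sketch, not the patent's method; all names, dimensions, and the dot-product scoring function are assumptions.

```python
# Sketch of single-attention image-context extraction: a decoder hidden
# state queries a set of local image region features. Shapes and names
# are illustrative assumptions, not taken from the patent.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def image_context(hidden, regions, W_q, W_k):
    """hidden: (d,) decoder state; regions: (n_regions, d_img) local features."""
    query = W_q @ hidden          # project decoder state to attention space
    keys = regions @ W_k.T        # project each region to attention space
    scores = keys @ query         # dot-product attention scores per region
    alpha = softmax(scores)       # attention weights over regions
    return alpha @ regions        # weighted image context vector, (d_img,)

rng = np.random.default_rng(0)
d, d_img, d_a, n = 8, 16, 8, 49   # e.g. a 7x7 CNN grid gives 49 regions
ctx = image_context(rng.normal(size=d),
                    rng.normal(size=(n, d_img)),
                    rng.normal(size=(d_a, d)),
                    rng.normal(size=(d_a, d_img)))
print(ctx.shape)  # (16,)
```

Because a single set of attention weights is recomputed at each decoding step from one query, this design has few parameters but, as the paragraph notes, limited capacity for complex multimodal tasks.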




Detailed Description of the Embodiments

[0035] Embodiments of the present invention are described in detail below, examples of which are shown in the drawings, wherein the same or similar reference numerals designate the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the figures are exemplary and are intended to explain the present invention and should not be construed as limiting the present invention.

[0036] In order to better understand the above technical solutions, exemplary embodiments of the present invention are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present invention are shown in the drawings, it should be understood that the invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that the present invention will be understood more thoroughly and its scope fully conveyed to those skilled in the art.



Abstract

The invention provides a translation method based on a multimodal machine translation model, comprising the following steps: obtaining a source-side sentence and its corresponding image, and preprocessing them to obtain the processed source-side sentence together with the global and local features of the image; establishing a multimodal machine translation model comprising an encoder and a decoder, the decoder containing a context-guided capsule network, and training the model; and translating the processed source-side sentence to be translated and its corresponding image with the trained model to generate the corresponding target-side sentence. Context is thus introduced into the decoder to guide the capsule network during translation, so that rich multimodal representations are generated dynamically without introducing a large number of parameters, effectively improving the performance of multimodal machine translation.
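One plausible reading of "context-guided capsule network" is standard capsule routing-by-agreement in which the routing logits are biased by each input capsule's agreement with the decoder context. The sketch below illustrates that idea only; the initialization of the logits from the context, the number of routing iterations, and all shapes are our assumptions, not the patent's exact formulation.

```python
# Hedged sketch of context-guided dynamic routing: ordinary capsule
# routing-by-agreement, with initial routing logits biased by similarity
# to a decoder context vector (an assumption about the patent's design).
import numpy as np

def squash(v):
    # Capsule nonlinearity: shrinks vectors to norm < 1, preserving direction.
    n2 = (v ** 2).sum(-1, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + 1e-9)

def context_guided_routing(u_hat, context, iters=3):
    """u_hat: (n_in, n_out, d) prediction vectors; context: (d,) decoder state."""
    # Bias the initial logits by agreement with the context (assumption).
    b = np.einsum('iod,d->io', u_hat, context)
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(1, keepdims=True)  # coupling coefficients
        s = np.einsum('io,iod->od', c, u_hat)            # weighted vote sum
        v = squash(s)                                    # output capsules
        b = b + np.einsum('iod,od->io', u_hat, v)        # agreement update
    return v

rng = np.random.default_rng(1)
v = context_guided_routing(rng.normal(size=(6, 4, 8)), rng.normal(size=8))
print(v.shape)  # (4, 8)
```

Because routing reuses the same agreement computation across iterations rather than adding separate attention heads, such a design can produce step-specific multimodal representations without a large parameter budget, which matches the advantage the abstract claims.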

Description

Technical Field

[0001] The present invention relates to the technical field of machine translation, and in particular to a translation method based on a multimodal machine translation model, a computer-readable storage medium, and a computer device.

Background

[0002] In the related art, existing multimodal machine translation methods typically treat the features of the image to be translated as global information and use an attention mechanism to dynamically extract image context features in order to learn a joint multimodal representation. However, using image features as global information and learning a joint multimodal representation in this way cannot capture features that are generated dynamically during translation; a single attention mechanism cannot handle complex multimodal translation tasks, while multi-attention mechanisms introduce too many parameters, leading to overfitting, which in turn greatly reduces the translation performance of multimodal machine translation.
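The preprocessing step described in the abstract yields a global feature and local features for the image. A common realization, given here purely as an assumption, is to take a CNN feature map of shape (H, W, C): the H*W spatial cells serve as local region features, and their spatial average serves as the global feature.

```python
# Illustrative split of a CNN feature map into global and local image
# features. The 7x7x2048 shape (e.g. a ResNet conv5 output) and the
# average-pooling choice are assumptions, not specified by the patent.
import numpy as np

def split_image_features(feature_map):
    """feature_map: (H, W, C) spatial CNN features."""
    h, w, c = feature_map.shape
    local = feature_map.reshape(h * w, c)   # one local vector per spatial cell
    global_feat = local.mean(axis=0)        # average-pooled global vector
    return global_feat, local

fmap = np.random.default_rng(2).normal(size=(7, 7, 2048))
g, l = split_image_features(fmap)
print(g.shape, l.shape)  # (2048,) (49, 2048)
```

The local vectors feed attention or routing over image regions, while the global vector summarizes the whole image for the encoder or decoder.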


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F40/58, G06N3/04, G06N3/08
CPC: G06F40/58, G06N3/08, G06N3/048, G06N3/045
Inventors: 苏劲松, 林欢, 尹永竞, 周楚伦, 姚俊峰
Owner: XIAMEN UNIV