
A multi-modal customer service automatic reply method and system

An automatic-reply, multi-modal technology, applied in character and pattern recognition, instruments, computing, etc.; it addresses problems such as ignoring item attribute information and achieves the effect of better satisfying users' wishes.

Active Publication Date: 2021-08-24
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

Existing methods only consider visual images during selection, but completely ignore the rich attribute information associated with items, such as price, material, size, and style;



Examples


Embodiment 1

[0035] This embodiment discloses a multi-modal dialogue system with an adaptive decoder, comprising the following steps:

[0036] Step 1: Receive the dialogue and encode it to get the context vector;

[0037] The encoding employs a context encoder. The context encoder includes: a low-level, word-level recurrent neural network and a residual network enhanced with soft visual attention, and a high-level, sentence-level recurrent neural network.

[0038] Specifically, at the low level, the input text utterance is encoded word by word by a word-level recurrent neural network, and the final hidden state, which embeds the information of the entire utterance, is taken as the representation of the input text utterance. Note that utterances can be textual or multi-modal. For the extraction of visual features, considering that users' visual attention differs across image regions, image utterances of commodities are encoded by a soft-visual-attention-enhanced residual network...
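The hierarchical context encoder described above (word-level RNN plus an attention-enhanced visual encoder at the low level, sentence-level RNN at the high level) can be sketched as follows. This is a minimal illustration in PyTorch with hypothetical module names and dimensions; it is not the authors' implementation, and the residual-network region features are assumed to be precomputed.

```python
# Minimal sketch of the hierarchical context encoder (hypothetical
# names/dimensions; region features from a residual network are assumed given).
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128,
                 img_feat_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Low-level, word-level RNN: encodes each utterance word by word.
        self.word_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Soft visual attention over image-region features.
        self.att = nn.Linear(img_feat_dim + hidden_dim, 1)
        self.img_proj = nn.Linear(img_feat_dim, hidden_dim)
        # High-level, sentence-level RNN over utterance representations.
        self.sent_rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def encode_text(self, tokens):                 # tokens: (B, T)
        _, h = self.word_rnn(self.embed(tokens))   # final hidden state embeds the utterance
        return h[-1]                               # (B, H)

    def encode_image(self, regions, query):        # regions: (B, R, D), query: (B, H)
        q = query.unsqueeze(1).expand(-1, regions.size(1), -1)
        w = torch.softmax(self.att(torch.cat([regions, q], -1)), dim=1)
        return self.img_proj((w * regions).sum(1)) # attended visual feature, (B, H)

    def forward(self, utterance_vecs):             # (B, N_utterances, H)
        _, h = self.sent_rnn(utterance_vecs)
        return h[-1]                               # context vector, (B, H)
```

Here the text query guides the soft attention weights over image regions, mirroring the idea that users attend to different parts of a commodity image.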

Embodiment 2

[0058] The purpose of this embodiment is to provide a multi-modal customer service automatic reply system.

[0059] In order to achieve the above purpose, this embodiment provides a multi-modal customer service automatic reply system, including:

[0060] A context encoder, which receives and encodes the utterance to obtain a context vector;

[0061] An intent-category identification module, which, based on the context vector, determines the corresponding intent category using a pre-trained intent-category identification model;

[0062] A reply-category determination module, which determines the reply category corresponding to the intent based on set rules;

[0063] A reply generation module, which, according to the reply category, takes the context vector as input and generates a corresponding reply using a pre-trained reply model.
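The four-module pipeline above can be sketched as a simple dispatch chain. The rule table, intent labels, and model stand-ins below are illustrative assumptions, not the patent's actual rules or models; they only show how intent identification, rule-based reply-category determination, and category-conditioned reply generation compose.

```python
# Hypothetical sketch of the reply pipeline (rule table and model
# interfaces are illustrative, not taken from the patent).

INTENT_TO_REPLY = {            # set rules: intent category -> reply category
    "greeting": "text",
    "ask_price": "text",
    "show_product": "image",
}

def classify_intent(context_vector):
    # Stand-in for the pre-trained intent-category identification model.
    return "show_product" if context_vector.get("mentions_item") else "greeting"

def generate_reply(context_vector, reply_category):
    # Stand-in for the pre-trained reply model; dispatches on reply category.
    if reply_category == "image":
        return {"type": "image", "items": context_vector.get("candidates", [])}
    return {"type": "text", "text": "How can I help you?"}

def auto_reply(context_vector):
    intent = classify_intent(context_vector)         # intent identification
    category = INTENT_TO_REPLY[intent]               # reply-category rules
    return generate_reply(context_vector, category)  # reply generation
```

For example, `auto_reply({"mentions_item": True, "candidates": ["sku-1"]})` would route to an image-form reply, while an empty context falls back to a text greeting.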

Embodiment 3

[0065] The purpose of this embodiment is to provide an electronic device.

[0066] In order to achieve the above object, this embodiment provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements:

[0067] Receive the utterance and encode it to get the context vector;

[0068] Based on the context vector, the corresponding intent category is determined based on the pre-trained intent category recognition model;

[0069] Determine the reply category corresponding to the intent based on the set rules;

[0070] According to the reply category, the context vector is used as an input, and a pre-trained reply model is used to generate a corresponding reply.



Abstract

The invention discloses a multi-modal customer service automatic reply method and system. The method includes the following steps: receiving and encoding an utterance to obtain a context vector; based on the context vector, determining the corresponding intention category using a pre-trained intention-category recognition model; determining the reply category corresponding to the intention based on set rules; and, according to the reply category, taking the context vector as input and generating the corresponding reply using a pre-trained reply model. The present invention can automatically recognize the user's intention from the user's utterance and adaptively generate replies in various forms.

Description

Technical field

[0001] The invention belongs to the technical field of artificial intelligence, and in particular relates to a multi-modal customer service automatic reply method and system.

Background technique

[0002] The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

[0003] Multimodal dialogue systems, which extend text-based dialogue systems, have recently received increasing attention in different domains, especially retail. Although existing task-oriented multimodal dialogue systems have shown promising performance, they still suffer from the following problems:

[0004] A chatbot's replies use different media to express various information, such as product display, product introduction, and daily greetings, typically expressed as text alone or as a combination of text and images. Existing methods treat text generation and image selection in multimodal dialogue systems as two i...

Claims


Application Information

Patent Timeline
Patent Type & Authority: Patent (China)
IPC (8): G06F16/9032, G06F40/30, G06K9/62
CPC: G06F16/90332, G06F40/30, G06F18/214
Inventors: 聂礼强, 王文杰, 王英龙, 姚一杨, 张化祥, 宋雪萌
Owner SHANDONG UNIV