Dual Path Multi-Modal High-Order Features for Textual Content based Visual Question Answering

Year: 2023

Description

Abstract

As a typical cross-modal problem, visual question answering (VQA) has received increasing attention from the computer vision and natural language processing communities. Reading and reasoning about the texts and visual contents in images is a burgeoning and important research topic in VQA, especially for assistive applications for the visually impaired. Given an image, the task is to predict an answer to a natural language question that is closely related to the image's textual contents. In this project, we propose a novel end-to-end textual content based VQA model, which grounds question answering on both visual and textual information. After encoding the image, the question, and the recognized text words, it uses multi-modal factorized high-order modules and an attention mechanism to fuse question-image and question-text features, respectively, so that the complex correlations among different features can be captured efficiently. To ensure the model's extendibility, it embeds candidate answers and recognized texts in a shared semantic embedding space and adopts a semantic embedding based classifier to perform answer prediction. Extensive experiments on the newly proposed TextVQA benchmark demonstrate that the proposed model achieves promising results.
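The sketch below illustrates the dual-path fusion idea described in the abstract: the question is fused separately with image features and with recognized-text (OCR) features through factorized high-order (bilinear) modules, and answers are scored by similarity in a semantic embedding space. This is a minimal illustration in PyTorch; all module names, dimensions, and the simplified pooling (attention over image regions and OCR tokens is omitted) are assumptions for clarity, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MFBFusion(nn.Module):
    """Multi-modal factorized bilinear (high-order) fusion of two feature vectors."""

    def __init__(self, x_dim, y_dim, latent_dim=1000, factor_k=5):
        super().__init__()
        self.factor_k = factor_k
        self.proj_x = nn.Linear(x_dim, latent_dim * factor_k)
        self.proj_y = nn.Linear(y_dim, latent_dim * factor_k)

    def forward(self, x, y):
        # Element-wise product in the expanded factor space.
        joint = self.proj_x(x) * self.proj_y(y)                            # (B, latent_dim * k)
        joint = joint.view(joint.size(0), -1, self.factor_k).sum(dim=2)    # sum-pool over k factors
        # Power and L2 normalization, as is standard for factorized bilinear pooling.
        joint = torch.sign(joint) * torch.sqrt(torch.abs(joint) + 1e-8)
        return F.normalize(joint, dim=1)


class DualPathFusion(nn.Module):
    """Fuses the question with image features and with recognized-text features
    along two separate paths, then predicts answers in a semantic embedding space."""

    def __init__(self, q_dim=1024, img_dim=2048, ocr_dim=300,
                 embed_dim=300, latent_dim=1000):
        super().__init__()
        self.q_img = MFBFusion(q_dim, img_dim, latent_dim)   # question-image path
        self.q_ocr = MFBFusion(q_dim, ocr_dim, latent_dim)   # question-text (OCR) path
        # Map the concatenated fused representation into the answer embedding space.
        self.to_embed = nn.Linear(2 * latent_dim, embed_dim)

    def forward(self, q_feat, img_feat, ocr_feat, answer_embeds):
        # q_feat: (B, q_dim); img_feat: (B, img_dim); ocr_feat: (B, ocr_dim)
        fused = torch.cat([self.q_img(q_feat, img_feat),
                           self.q_ocr(q_feat, ocr_feat)], dim=1)
        pred = F.normalize(self.to_embed(fused), dim=1)                    # (B, embed_dim)
        # Score each candidate answer by cosine similarity with its semantic embedding;
        # the candidate set can mix a fixed vocabulary with the recognized OCR tokens.
        return pred @ F.normalize(answer_embeds, dim=1).t()                # (B, num_answers)
```

In the full model, attention would weight image regions and OCR tokens before pooling, and because answers are scored against semantic embeddings rather than a fixed softmax, newly recognized text words can be added as candidates without retraining the classifier head.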
