Area 3: Multimedia Question-Answering and Dynamic Knowledge Organization

In recent years we have witnessed a flourishing of automated and community-based question answering (QA) services, which have emerged as an effective paradigm for large-scale information seeking, knowledge dissemination, and expert routing. However, existing QA services usually provide only textual answers, which for many questions are neither intuitive nor sufficiently informative. Meanwhile, the amount of multimedia data available on the Internet has grown rapidly and now covers many practical topics. It is therefore natural to shift efforts from traditional text-based QA to multimedia QA.


To address the above problems, we focus on tackling the following key research issues:

  • Enrich textual QA with appropriate multimedia data by leveraging community-contributed knowledge to bridge the semantic gap between textual questions and media answers, as shown in Figure 3.1.
  • Predict the availability of multimedia answers on the web by jointly analyzing semantic cues and visual information, as illustrated in Figure 3.2.
  • Improve search performance for the complex queries generated from QA pairs, so that relevant multimedia data can be selected more accurately.
  • Incorporate social features to better match questions to experts with first-hand experience, so as to improve expert response rates (see Figure 3.3).
  • Flexibly generate content ontologies for QA portals, with the aim of deriving dynamic and timely knowledge structures from the contributions of crowdsourcing and content experts.
  • Help users learn, ask, and find better questions and answers by leveraging knowledge structures and QA archives.
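The medium-prediction issue above can be viewed as a classification task: given a question, decide whether it is best answered by text, an image, or a video. The proposal calls for jointly analyzing semantic and visual cues; the following minimal keyword-based sketch only illustrates the task itself, and the cue sets and function name are hypothetical, not the actual features of the proposed framework.

```python
# Hypothetical sketch of answer-medium prediction.
# The cue-word sets below are illustrative assumptions only; the proposed
# framework would jointly analyze richer semantic and visual information.

VISUAL_CUES = {"look", "show", "appearance", "picture", "diagram"}
PROCEDURAL_CUES = {"how", "install", "repair", "cook", "demonstrate"}

def predict_answer_medium(question: str) -> str:
    """Classify a question as best answered by 'text', 'image', or 'video'."""
    words = set(question.lower().replace("?", " ").split())
    if words & PROCEDURAL_CUES:
        return "video"   # procedural questions often benefit from video demonstrations
    if words & VISUAL_CUES:
        return "image"   # appearance-oriented questions suit image answers
    return "text"        # default: a textual answer suffices

print(predict_answer_medium("How do I install a faucet?"))        # video
print(predict_answer_medium("What does a melanoma look like?"))   # image
print(predict_answer_medium("What is the capital of France?"))    # text
```

In a realistic system, such rules would be replaced by a classifier trained on labeled QA pairs, combining textual features of the question with visual features of candidate media.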


To demonstrate the above capabilities, we will focus on several vertical domains, such as medicine, community health, and products.




Figure 3.1: Schematic illustration of the proposed multimedia answering scheme.


Figure 3.2: The proposed framework for multimedia answer re-ranking for complex queries.


Figure 3.3: Question annotation framework for social QA.