Online Evaluations of Features and Ranking Models for Question Retrieval
Tomohiro Manabe, Sumio Fujita, and Akiomi Nishida
NTCIR-14 Post-Conference Proceedings, 2019/12
Information Retrieval
- We report our work on the NTCIR-14 OpenLiveQ-2 task. From the data set provided for question retrieval on a community QA service, we extracted several BM25F-like and translation-based features in addition to basic features such as TF, TFIDF, and BM25, and then constructed multiple ranking models from these feature sets. In the first stage of online evaluation, our linear models with the BM25F-like and translation-based features obtained the most credit among 61 methods, including other teams' methods and a snapshot of the ranking then in service. In the second stage, our neural ranking models with basic features consistently obtained a majority of the credit among 30 methods over a statistically significant number of page views. These online evaluation results demonstrate that neural ranking is one of the most promising approaches for improving the service.
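For reference, the basic BM25 feature named in the abstract can be sketched as the standard Okapi BM25 term score. This is a minimal illustration only; the parameter values (k1, b) and corpus statistics below are assumptions for the example, not figures from the paper:

```python
import math

def bm25_term_score(tf, df, doc_len, avg_doc_len, n_docs, k1=1.2, b=0.75):
    """Okapi BM25 contribution of a single query term to one document's score.

    tf: term frequency in the document
    df: number of documents containing the term
    doc_len / avg_doc_len: document length and corpus average length
    n_docs: total number of documents in the collection
    """
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
    tf_norm = tf * (k1 + 1.0) / (tf + k1 * (1.0 - b + b * doc_len / avg_doc_len))
    return idf * tf_norm

# Hypothetical example: a term occurring 3 times in a 120-word question,
# found in 50 of 10,000 questions with average length 100.
score = bm25_term_score(tf=3, df=50, doc_len=120, avg_doc_len=100, n_docs=10000)
```

BM25F, by contrast, computes a single field-weighted term frequency across document fields (e.g., question title and body) before applying the same saturation formula.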