OpenLiveQ (Open Live Test for Question Retrieval) is one of the core tasks in NTCIR-13, in which your question retrieval systems are evaluated in the production environment of Yahoo! Chiebukuro (a community Q&A service).
| Team Name | Description | Submission Time | nDCG@10 |
| --- | --- | --- | --- |
| cdlab | #7 | 2017-03-26 11:47:00 UTC | 0.37515 |
| YJRS | BM25F, roughly optimized with CA where n = 3 and sf = 0.8. | 2017-03-26 03:21:10 UTC | 0.34316 |
| OKSAT | run0 | 2017-03-25 12:24:56 UTC | 0.35451 |
| cdlab | #6 | 2017-03-25 11:44:34 UTC | 0.29530 |
| YJRS | BM25F, roughly optimized with CA where n = 3. | 2017-03-25 03:00:59 UTC | 0.33341 |
| cdlab | #5 | 2017-03-24 11:02:16 UTC | 0.37518 |
| OKSAT | run4 | 2017-03-24 08:42:37 UTC | 0.36388 |
| Erler | Test | 2017-03-24 08:03:40 UTC | 0.40566 |
| cdlab | #4 | 2017-03-23 02:48:00 UTC | 0.37207 |
| OKSAT | run3 | 2017-03-23 00:23:08 UTC | 0.29426 |
| TUA1 | ubuntu14.04 amd64 test1 | 2017-03-22 12:11:49 UTC | 0.37670 |
| KUIDL | LambdaMART (without normalization) | 2017-03-22 09:13:58 UTC | 0.35788 |
| ORG | Example result with Coordinate Ascent (with improved rel labels, no norm) | 2017-03-22 09:02:12 UTC | 0.35957 |
| YJRS | Roughly optimized BM25F. | 2017-03-22 01:55:44 UTC | 0.33337 |
| OKSAT | run2 | 2017-03-21 17:19:39 UTC | 0.29214 |
| KUIDL | LambdaMART (with smaller amount of training data) | 2017-03-21 09:09:30 UTC | 0.32683 |
| ORG | Example result with Coordinate Ascent (with improved rel labels) | 2017-03-21 08:53:33 UTC | 0.36642 |
| cdlab | #3 | 2017-03-20 16:22:53 UTC | 0.26786 |
| OKSAT | run1 | 2017-03-20 15:35:30 UTC | 0.37083 |
| YJRS | Naive BM25F. | 2017-03-20 14:44:52 UTC | 0.36452 |
| cdlab | #2 | 2017-03-19 10:02:54 UTC | 0.36321 |
| KUIDL | LambdaMART | 2017-03-19 01:31:07 UTC | 0.34231 |
| ORG | Example result with Coordinate Ascent | 2017-03-19 01:10:23 UTC | 0.41328 |
| cdlab | #1 | 2017-03-18 05:40:24 UTC | 0.33105 |
| YJRS | Test run. | 2017-03-17 06:46:18 UTC | 0.34371 |
| ORG | This is a sample (almost identical to the distributed file). | 2017-02-20 09:50:32 UTC | 0.35451 |
OpenLiveQ (Open Live Test for Question Retrieval) provides an open live test environment in a community Q&A service of Yahoo Japan Corporation for evaluating question retrieval systems. We offer opportunities for more realistic system evaluation and help research groups address problems specific to real search systems in a production environment (e.g. ambiguous/underspecified queries and diverse relevance criteria). The task is simply defined as follows: given a query and a set of questions with answers, return a ranked list of questions.
| Date | Event |
| --- | --- |
| Feb 28, 2017 | Registration due (registration at the NTCIR-13 Web site) * |
| Jan 1 - Mar 31, 2017 | Offline test (evaluation with relevance judgment data) * |
| Apr - Jun, 2017 | Online test (evaluation with real users) # |
| Jul 1, 2017 | Online test result release # |
| Sep 1, 2017 | Task overview paper (draft) release # |
| Oct 1, 2017 | Task participant paper (draft) submission due * |
| Nov 1, 2017 | Task participant paper (camera-ready) submission due * |
| Dec 5 - 8, 2017 | NTCIR-13 Conference at NII, Tokyo, Japan * |

\* and # indicate items to be completed by participants and organizers, respectively.
To participate in the NTCIR-13 OpenLiveQ task, please read through "What participants must do". Then take the following steps:
- Register through online registration
- Make two signed original copies of the user agreement forms
- Send the signed copies by postal mail or courier to the NTCIR Project Office
After the agreement is concluded, we will provide the information on how to download the data.
Participants can obtain the following data:
- 1,000 training and 1,000 test queries input into Yahoo! Chiebukuro search
- The clickthrough rate of each question in the SERP for each query
- Demographics of users who clicked on each question
  - Fraction of male and female users
  - Fraction of users in each age group
- At most 1,000 questions with answers for each query, including information presented in the SERP (e.g. snippets)
A set of questions \(D_q \subset D\) (\(D\) is the set of all questions) is given for each query \(q \in Q\). The only task in OpenLiveQ is to rank the questions in \(D_q\) for each query \(q\).
The input consists of queries and questions for each query.
Queries are included in the file "OpenLiveQ-queries-test.tsv", in which each line contains a single query in the form [QueryID_i]\t[Content_i], where [QueryID_i] is a query ID and [Content_i] is a query string.
All the questions are included in the file "OpenLiveQ-questions-test.tsv", in which each line contains a pair of a query ID and a question ID in the form [QueryID_i]\t[QuestionID_i_j]. Such a pair indicates which questions correspond to a query, i.e. question [QuestionID_i_j] belongs to \(D_q\) for query [QueryID_i].
Sample of Input
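As a minimal illustration only, the following Python sketch reads the two input files described above (the file names are those distributed by the task; the two-column tab-separated layout is as described, and the function names are illustrative):

```python
import csv

def load_queries(path="OpenLiveQ-queries-test.tsv"):
    """Return {query_id: query_string} from the query file."""
    queries = {}
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            queries[row[0]] = row[1]
    return queries

def load_questions(path="OpenLiveQ-questions-test.tsv"):
    """Return {query_id: [question_id, ...]}, i.e. the set D_q for each query q."""
    questions = {}
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            questions.setdefault(row[0], []).append(row[1])
    return questions
```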
The output is a ranked list of questions for each query. Ranked lists should be saved in a single file. The first line of the file is [Description], a simple description of your system, which must not include newline characters. Each of the remaining lines contains a pair of a query ID and a question ID in the form [QueryID_i]\t[QuestionID_i_j]. The content of the output file except for the first line must be exactly the same as that of the question file "OpenLiveQ-questions.tsv" except for the order of lines. In the output file, a line [QueryID_i]\t[QuestionID_i_j] that appears before a line [QueryID_i]\t[QuestionID_i_j'] indicates that the rank of question [QuestionID_i_j] is higher than that of question [QuestionID_i_j'].
Sample of Output
The output above represents the following ranked lists:
- OLQ-0001: q0000000001, q0000000000
- OLQ-0002: q0000000002, q0000000000
- OLQ-0003: q0000000004, q0000000003
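To make the required layout concrete, here is a minimal Python sketch that writes a run file whose first line is the description and whose remaining lines are query-ID/question-ID pairs in ranked order (the function name, file path, and sample data below are illustrative):

```python
def write_run(ranked_lists, description, path="my_run.tsv"):
    """Write a run file: the description on the first line, then one
    query-ID / question-ID pair per line in ranked order."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(description + "\n")
        for query_id, question_ids in ranked_lists.items():
            for question_id in question_ids:
                f.write(f"{query_id}\t{question_id}\n")

# The sample ranked lists above would be written as follows:
write_run(
    {
        "OLQ-0001": ["q0000000001", "q0000000000"],
        "OLQ-0002": ["q0000000002", "q0000000000"],
        "OLQ-0003": ["q0000000004", "q0000000003"],
    },
    description="A simple description of your system",
)
```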
To rank the questions, participants can leverage several resources: training queries, training questions, question data including titles and bodies, and clickthrough data.
Training queries are included in file "OpenLiveQ-queries-train.tsv", and the file format is the same as that of "OpenLiveQ-queries-test.tsv".
Training questions are included in file "OpenLiveQ-questions-train.tsv", and the file format is the same as that of "OpenLiveQ-questions-test.tsv".
Information about all the questions as of December 1-9, 2016 is included in "OpenLiveQ-question-data.tsv". Each line of the file contains the following tab-separated values of a question (a parsing sketch follows the list):
- Query ID (a query for the question)
- Rank of the question in a Yahoo! Chiebukuro search result for the query of Query ID
- Question ID
- Title of the question
- Snippet of the question in a search result
- Status of the question (accepting answers, accepting votes, solved)
- Last update time of the question
- Number of answers for the question
- Page view of the question
- Category of the question
- Body of the question
- Body of the best answer for the question
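The following is a minimal Python sketch for reading this file, assuming the twelve tab-separated columns appear exactly in the order listed above (the field names are illustrative, not official):

```python
import csv

# Column names correspond, in order, to the twelve values listed above.
QUESTION_FIELDS = [
    "query_id", "rank", "question_id", "title", "snippet", "status",
    "updated_at", "answer_num", "view_num", "category", "body", "best_answer",
]

def load_question_data(path="OpenLiveQ-question-data.tsv"):
    """Return a list of dicts, one per question, keyed by QUESTION_FIELDS."""
    records = []
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            records.append(dict(zip(QUESTION_FIELDS, row)))
    return records
```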
Clickthrough data are available for some of the questions. Based on the clickthrough data, one can estimate the click probability of the questions and understand what kinds of users click on a certain question. The clickthrough data were collected from August 24, 2016 to November 23, 2016 and are included in the file "OpenLiveQ-clickthrough-data.tsv". Each line consists of the following tab-separated values (a loading sketch is given below):
- Query ID (a query for the question)
- Question ID
- Most frequent rank of the question in a Yahoo! Chiebukuro search result for the query of Query ID
- Clickthrough rate
- Fraction of male users among those who clicked on the question
- Fraction of female users among those clicked on the question
- Fraction of users under 10 years old among those who clicked on the question
- Fraction of users in their 10s among those who clicked on the question
- Fraction of users in their 20s among those who clicked on the question
- Fraction of users in their 30s among those who clicked on the question
- Fraction of users in their 40s among those who clicked on the question
- Fraction of users in their 50s among those who clicked on the question
- Fraction of users over 60 years old among those who clicked on the question
The number of query-question pairs in the clickthrough data is 440,163. The question information can be found in "OpenLiveQ-question-data.tsv" for 390,502 query-question pairs, while it is not included for the other pairs.
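Similarly, here is a minimal Python sketch for loading the clickthrough data, assuming the thirteen tab-separated columns appear in the order listed above (the dictionary keys are illustrative):

```python
import csv

def load_clickthrough(path="OpenLiveQ-clickthrough-data.tsv"):
    """Return {(query_id, question_id): feature dict} from the clickthrough file."""
    clicks = {}
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            clicks[(row[0], row[1])] = {
                "rank": int(row[2]),        # most frequent rank in the SERP
                "ctr": float(row[3]),       # clickthrough rate
                "male": float(row[4]),
                "female": float(row[5]),
                # columns 7-13: fractions for the age groups, in the order above
                "age_fractions": [float(v) for v in row[6:13]],
            }
    return clicks
```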
Evaluation with relevance judgment data
The offline test is carried out before the online test (explained later) and determines, based on its results, which participants' systems are evaluated in the online test. Evaluation is conducted in a similar way to traditional ad-hoc retrieval tasks: results are evaluated against relevance judgments with metrics such as nDCG (normalized discounted cumulative gain), ERR (expected reciprocal rank), and Q-measure. During the offline test period, participants can submit their results once per day through this Web site and obtain evaluation results right after submission.
Relevance Judgment
To simulate the online test in the offline test, we conduct relevance judgments with the following instruction: "Suppose you input query \(q\) and received a set of questions \(D_q\). Please select all the questions on which you want to click." Assessors are not presented with the full content of each question; they are asked to evaluate questions in a page similar to the real SERP of Yahoo! Chiebukuro. This type of relevance judgment differs from traditional ones and is expected to produce results similar to those of the online test. Multiple assessors are assigned to each query, and the relevance grade of each question is estimated as the number of assessors who selected the question. For example, the relevance grade is 2 if two out of three assessors marked a question.
Evaluation Metrics
We plan to use the following evaluation metrics (a computation sketch follows the list):
- nDCG (normalized discounted cumulative gain)
- ERR (expected reciprocal rank)
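As a reference, the following Python sketch computes both metrics in their standard formulations (the exact gain, discount, and cutoff settings used by the organizers may differ):

```python
from math import log2

def ndcg_at_k(grades, all_grades, k=10):
    """nDCG@k with gain 2^g - 1 and log2 discount.
    grades: relevance grades in ranked order; all_grades: all grades for the query."""
    def dcg(gs):
        return sum((2 ** g - 1) / log2(i + 2) for i, g in enumerate(gs[:k]))
    ideal = dcg(sorted(all_grades, reverse=True))
    return dcg(grades) / ideal if ideal > 0 else 0.0

def err(grades, max_grade):
    """Expected reciprocal rank with stop probability (2^g - 1) / 2^max_grade."""
    score, p_continue = 0.0, 1.0
    for i, g in enumerate(grades, start=1):
        r = (2 ** g - 1) / (2 ** max_grade)
        score += p_continue * r / i
        p_continue *= 1 - r
    return score
```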
Submission
You can submit your run with the following command in Linux or Mac environments:
curl http://www.openliveq.net/runs -X POST -H "Authorization:[AUTH_TOKEN]" -F run_file=@[PATH_TO_YOUR_RUN_FILE]
where [AUTH_TOKEN] is distributed only to participants.
curl http://www.openliveq.net/runs -X POST -H "Authorization:ORG:AABBCCDDEEFF" -F run_file=@data/your_run.tsv
Please note that
- It takes a few minutes to upload a run file,
- Each team is not allowed to submit two or more runs within 24 hours, and
- The submission deadline is March 31.
The evaluation result (nDCG@10) will be displayed on the top of this website. The top 10 teams in terms of nDCG@10 will be invited to the online evaluation. Details of evaluation results will be sent after the submission deadline.
Evaluation with real users
Submitted results are evaluated by multileaving [1]. At most 10 systems are selected based on the results of the offline test and evaluated in the online test. The submitted results are combined into a single SERP by multileaving, presented to real users during the online test period, and evaluated on the basis of observed clicks. Results submitted in the offline test period are used as-is in the online test. Note that some questions can be excluded from the online test if they are deleted for some reason before or during the online test.
The best result from each team in the offline evaluation will be used in the online evaluation.
[1] Schuth et al. "Multileaved Comparisons for Fast Online Evaluation." CIKM 2014.
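The task's multileaving method is as cited above [1]. As a rough illustration of the general idea only (not the organizers' implementation), the following Python sketch shows a simplified team-draft-style multileave and click crediting:

```python
import random

def team_draft_multileave(rankings, length=10):
    """Combine several rankings into one list; also return, per position,
    the index of the ranking that contributed the item."""
    combined, teams, used = [], [], set()
    while len(combined) < length:
        # Each round, the rankings pick in a random order.
        for r in random.sample(range(len(rankings)), len(rankings)):
            doc = next((d for d in rankings[r] if d not in used), None)
            if doc is not None:
                combined.append(doc)
                teams.append(r)
                used.add(doc)
            if len(combined) >= length:
                break
        if all(d in used for ranking in rankings for d in ranking):
            break  # nothing left to add
    return combined, teams

def credit_clicks(teams, clicked_positions):
    """Give one credit per click to the system that contributed the clicked item."""
    credits = {}
    for pos in clicked_positions:
        credits[teams[pos]] = credits.get(teams[pos], 0) + 1
    return credits
```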
- Makoto P. Kato (Kyoto University)
- Takehiro Yamamoto (Kyoto University)
- Sumio Fujita (Yahoo Japan Corporation)
- Akiomi Nishida (Yahoo Japan Corporation)
- Tomohiro Manabe (Yahoo Japan Corporation)