This is an exploratory task on the early risk detection of depression. The challenge consists of sequentially processing pieces of evidence and detecting early traces of depression as soon as possible. The task is mainly concerned with evaluating Text Mining solutions and, thus, it concentrates on texts written on Social Media. Texts must be processed in the order they were created, so that systems that perform this task effectively could be applied to sequentially monitor user interactions in blogs, social networks, or other types of online media. The task is organized into two different stages: a training stage and a test stage (both described below).
(April 25th, 2017) The pilot task has finished! We received 30 contributions from 8 different institutions. The list of participants is shown below:
Institution | Submitted files
ENSEEIHT, France | GPLA, GPLB, GPLC, GPLD
FH Dortmund, Germany | FHDO-BCSGA, FHDO-BCSGB, FHDO-BCSGC, FHDO-BCSGD, FHDO-BCSGE
U. Arizona, USA | UArizonaA, UArizonaB, UArizonaC, UArizonaD, UArizonaE
U. Autónoma Metropolitana, Mexico | LyRA, LyRB, LyRC, LyRD, LyRE
U. Nacional de San Luis, Argentina | UNSLA
U. of Quebec in Montreal, Canada | UQAMA, UQAMB, UQAMC, UQAMD, UQAME
Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico | CHEPEA, CHEPEB, CHEPEC, CHEPED
ISA FRCCSC RAS, Russia | NLPISA
We evaluated the contributed runs with Early Risk Detection Error (ERDE), an error measure that takes into account both the correctness of the decisions and the delay in making them. More details about ERDE can be found in [Losada & Crestani 2016]; a sketch of how it scores a single subject is given after the table. The following table reports the performance results (we also include the standard classification metrics: F1, Precision, and Recall). We look forward to learning about the specifics of each early detection algorithm!
Run | ERDE5 | ERDE50 | F1 | P | R |
GPLA | 17.33% | 15.83% | 0.35 | 0.22 | 0.75 |
GPLB | 19.14% | 17.15% | 0.30 | 0.18 | 0.83 |
GPLC | 14.06% | 12.14% | 0.46 | 0.42 | 0.50 |
GPLD | 14.52% | 12.78% | 0.47 | 0.39 | 0.60 |
FHDO-BCSGA | 12.82% | 9.69% | 0.64 | 0.61 | 0.67 |
FHDO-BCSGB | 12.70% | 10.39% | 0.55 | 0.69 | 0.46 |
FHDO-BCSGC | 13.24% | 10.56% | 0.56 | 0.57 | 0.56 |
FHDO-BCSGD | 13.04% | 10.53% | 0.57 | 0.63 | 0.52 |
FHDO-BCSGE | 14.16% | 12.42% | 0.60 | 0.51 | 0.73 |
UArizonaA | 14.62% | 12.68% | 0.40 | 0.31 | 0.58 |
UArizonaB | 13.07% | 11.63% | 0.30 | 0.33 | 0.27 |
UArizonaC | 17.93% | 12.74% | 0.34 | 0.21 | 0.92 |
UArizonaD | 14.73% | 10.23% | 0.45 | 0.32 | 0.79 |
UArizonaE | 14.93% | 12.01% | 0.45 | 0.34 | 0.63 |
LyRA | 15.65% | 15.15% | 0.14 | 0.11 | 0.19 |
LyRB | 16.75% | 15.76% | 0.16 | 0.11 | 0.29 |
LyRC | 16.14% | 15.51% | 0.16 | 0.12 | 0.25 |
LyRD | 14.97% | 14.47% | 0.15 | 0.13 | 0.17 |
LyRE | 13.74% | 13.74% | 0.08 | 0.11 | 0.06 |
UNSLA | 13.66% | 9.68% | 0.59 | 0.48 | 0.79 |
UQAMA | 14.03% | 12.29% | 0.53 | 0.48 | 0.60 |
UQAMB | 13.78% | 12.78% | 0.48 | 0.49 | 0.46 |
UQAMC | 13.58% | 12.83% | 0.42 | 0.50 | 0.37 |
UQAMD | 13.23% | 11.98% | 0.38 | 0.64 | 0.27 |
UQAME | 13.68% | 12.68% | 0.39 | 0.45 | 0.35 |
CHEPEA | 14.75% | 12.26% | 0.48 | 0.38 | 0.65 |
CHEPEB | 14.78% | 12.29% | 0.47 | 0.37 | 0.63 |
CHEPEC | 14.81% | 12.57% | 0.46 | 0.37 | 0.63 |
CHEPED | 14.81% | 12.57% | 0.45 | 0.36 | 0.62 |
NLPISA | 15.59% | 15.59% | 0.15 | 0.12 | 0.21 |
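For reference, below is a minimal Python sketch of how ERDE scores a single subject, following the definition in [Losada & Crestani 2016]. The cost values c_fp, c_fn, and c_tp are parameters of the measure; the defaults below are placeholders, not the official values used in the evaluation.

```python
import math

def latency_cost(k, o):
    # lc_o(k) = 1 - 1 / (1 + e^(k - o)): close to 0 while the number of observed
    # writings k is below the delay parameter o, approaching 1 as k grows past it.
    return 1.0 - 1.0 / (1.0 + math.exp(min(k - o, 700.0)))  # min() avoids overflow

def erde(decision, truth, k, o, c_fp=0.1, c_fn=1.0, c_tp=1.0):
    # decision, truth: 1 = risk case, 0 = no risk; k = writings seen when deciding.
    # The cost defaults here are illustrative placeholders.
    if decision == 1 and truth == 0:
        return c_fp                       # false positive: fixed cost
    if decision == 0 and truth == 1:
        return c_fn                       # false negative: fixed cost
    if decision == 1 and truth == 1:
        return latency_cost(k, o) * c_tp  # true positive: cost grows with delay
    return 0.0                            # true negative: no cost
```

The reported ERDE5 and ERDE50 figures are the mean of these per-subject errors over all subjects, with o = 5 and o = 50 respectively.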
The training data were sent to all registered participants on Nov 30th, 2016.
The training data contain the following components: the subjects' writings (XML files) split into 10 chunks (chunk1, ..., chunk10), the golden truth file (risk_golden_truth.txt), and the writings_per_subject_all_train.txt file, which stores the number of writings per subject.
Since this is the training data, you get all chunks now, but you should adapt your algorithms so that the chunks are processed in sequence (for example, do not process chunk3 if you have not yet processed chunk1 and chunk2).
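For illustration, here is a minimal sketch of this sequential regime, assuming a hypothetical layout with one folder per chunk and one XML file per subject per chunk (the folder and file names are our assumptions; adjust them to the actual training package):

```python
import os

seen = {}  # subject id -> concatenation of all writings released so far

for i in range(1, 11):
    chunk_dir = f"chunk{i}"  # assumed folder name for the ith release
    for fname in sorted(os.listdir(chunk_dir)):
        subject_id = fname.rsplit("_", 1)[0]  # assumed naming: <subject>_<i>.xml
        with open(os.path.join(chunk_dir, fname), encoding="utf-8") as f:
            seen[subject_id] = seen.get(subject_id, "") + f.read()
    # Run your model here on `seen` only: after the ith release it must not
    # look at any chunk j with j > i.
```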
SCRIPTS FOR EVALUATION:
To facilitate your experiments, we provide two scripts that could be of help during the training stage. These scripts are in the scripts evaluation folder.
We recommend that you follow these steps:
1. Use your early detection algorithm to process the chunk1 files and produce your first output file (e.g. usc_1.txt). This file should follow the format described in the instructions for the test stage (see the "Test" tab: 0/1/2 for each subject).
2. Do the same for all the chunki files (i = 2, ..., 10). When you process the chunki files it is OK to use information from the chunkj files (for j <= i); note that the chunkj files (j = 1, ..., i) contain all posts/comments that you have seen after the ith release of data.
3. You now have your 10 output files (e.g. usc_1.txt ... usc_10.txt). As argued above, you need to take a decision on every subject (you cannot say 0 all the time), so every subject needs to have 1 or 2 assigned in some of your output files. A sketch of writing one such file is shown after these steps.
4. Use aggregate_results.py to combine your output files into a global output file. This aggregation script has two inputs: 1) the folder where you have your 10 output files and 2) the path to the writings_per_subject_all_train.txt file, which stores the number of writings per subject. This is required because we need to know how many writings were needed to take each decision. For instance, if subject_k has a total of 500 writings in the collection, then every chunk has 50 writings from subject_k; if your team needed 2 chunks to make a decision on subject_k, then we will record 100 as the number of writings that you needed to take this decision. (A conceptual sketch of this aggregation is also given after these steps.)
Example of usage: $ python aggregate_results.py -path <folder containing your 10 output files> -wsource <path to the writings_per_subject_all_train.txt file>
This script creates a file, e.g. usc_global.txt, which stores your final decision on every subject and the number of writings that you saw before making each decision.
5. Get the final performance results from the erisk_eval.py script. It has three inputs: a) the path to the golden truth file (risk_golden_truth.txt), b) the path to the overall output file, and c) the value of o (the delay parameter of the ERDE metric).
Example: $ python erisk_eval.py -gpath <path to the risk_golden_truth.txt file> -ppath <path to the overall output file> -o <value of the ERDE delay parameter>
Example: $ python erisk_eval.py -gpath ../risk_golden_truth.txt -ppath ../folder/usc_global.txt -o 5
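As a concrete illustration of step 3, here is a minimal sketch for writing one round's output file (the function name and dict layout are ours; the two-tab separator is the one required by the evaluation scripts, see the note in the Test section):

```python
def write_round_file(path, decisions):
    # decisions: dict mapping subject id -> 0 (no decision yet), 1 or 2.
    # Exactly two tab characters must separate the subject name and the code.
    with open(path, "w", encoding="utf-8") as out:
        for subject, code in sorted(decisions.items()):
            out.write(f"{subject}\t\t{code}\n")

# e.g. write_round_file("usc_1.txt", {"test_subject_id1": 0, "test_subject_id2": 1})
```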
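And, conceptually, the aggregation in step 4 behaves roughly as follows (a sketch of the logic, not the actual aggregate_results.py):

```python
def aggregate(round_paths, writings_per_subject):
    # round_paths: the 10 per-round files in release order (usc_1.txt ... usc_10.txt).
    # writings_per_subject: dict subject id -> total writings in the collection.
    final = {}  # subject id -> (decision, writings seen when the decision was made)
    for i, path in enumerate(round_paths, start=1):
        with open(path, encoding="utf-8") as f:
            for line in f:
                subject, code = line.split()
                if subject not in final and code in ("1", "2"):
                    # Each chunk holds 10% of a subject's writings, so a decision
                    # after i chunks was based on i/10 of the subject's total
                    # (e.g. 500 writings, decision after 2 chunks -> 100 writings).
                    final[subject] = (int(code), writings_per_subject[subject] * i // 10)
    return final
```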
At test time, we will first release chunk1 for the test subjects and ask you for your output. A few days later, we will release chunk2, and so forth. The format required for the output file to be sent after each release of test data is the following (one line per subject):

test_subject_id1		CODE
test_subject_id2		CODE
...
test_subject_idn		CODE
IMPORTANT NOTE: you have to put exactly two tabs between the subject name and the CODE (otherwise, the Python evaluation script does not work!).
test_subject_idn is the id of the test subject (the ID field in the XML files).
CODE is your decision about the subject, with three possible values: 0 (no decision yet: you want to see more evidence), 1 (the subject is a risk case of depression), and 2 (the subject is not a risk case of depression).
If you emit a decision on a subject, then any future decision on the same subject will be ignored. For simplicity, you can include all subjects in all your submitted files, but, for each subject, your algorithm will be evaluated based on the first file that contains a decision on that subject. And you cannot say 0 all the time: at some point you need to make a decision on every subject (i.e., at the latest, after the 10th chunk you need to emit your decision).
If a team does not submit the required file before the deadline, then we'll take the previous file from the same team and assume that all things stay the same (no new decisions for this round).
If a team does not submit the file after the first round, then we'll assume that the team has not taken any decision (all subjects set to 0, i.e. no decision).
Each team can experiment with several models for this task and submit up to 5 files for each round. If you test different models, then the files should be named ORGA_n.txt (decisions after the nth chunk by model A), ORGB_n.txt (decisions after the nth chunk by model B), etc.
More info: [Losada & Crestani 2016]