Visual Commonsense Reasoning

On VCR, a model must not only answer commonsense visual questions, but also provide a rationale that explains why the answer is true.

Submitting to the leaderboard

Submission is easy! You just need to email Rowan with your predictions. Formatting instructions are below:

Please include in your email: 1) a name for your model, 2) your team name, and, optionally, 3) a GitHub repo or paper link.

I'll try to get back to you within a few days, usually sooner. Teams can only submit results from a model once every 7 days.

What kinds of submissions are allowed?

The only constraint is that your system must predict the answer first, then the rationale. (The rationales were selected to be highly relevant to the correct Q,A pair, so they leak information about the correct answer.)

  • To deter models from exploiting this leak, the submission format involves submitting predictions for each possible rationale, conditioned on each possible answer.
  • A simple way of setting up the experiments (used in the paper) is to treat each subtask as a query with four response choices. For Q->A, the query is the question and the response choices are the answers. For QA->R, the query is the question and answer concatenated together, and the response choices are the rationales.
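As a concrete illustration, a submission in this style could be assembled as follows. This is only a sketch: the `score(query, response)` call stands in for your own model, and the `annot_id`/column names are assumptions, not the official format — follow the emailed formatting instructions for actual submissions.

```python
import csv

def score(query, response):
    """Hypothetical model call: return a relevance score for a
    (query, response) pair. Replace with your own model."""
    return 0.0

def build_submission(examples, out_path):
    """Write one CSV row per question: 4 answer scores (Q->A) plus,
    conditioned on each possible answer, 4 rationale scores (QA->R),
    i.e. 4 + 4*4 = 20 predictions per question."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        header = ["annot_id"]  # assumed identifier column
        header += [f"answer_{i}" for i in range(4)]
        header += [f"rationale_conditioned_on_a{i}_{j}"
                   for i in range(4) for j in range(4)]
        writer.writerow(header)
        for ex in examples:
            row = [ex["annot_id"]]
            # Q->A: the query is the question, responses are the answers.
            row += [score(ex["question"], a) for a in ex["answers"]]
            # QA->R: the query is the question concatenated with each
            # candidate answer; responses are the rationales.
            for a in ex["answers"]:
                query = ex["question"] + " " + a
                row += [score(query, r) for r in ex["rationales"]]
            writer.writerow(row)
```

Scoring every rationale under every candidate answer (not just the predicted one) is what prevents a system from peeking at the rationale to recover the answer.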


If your question isn't about something private, consider asking it in the VCR google group instead:

VCR Leaderboard

There are two different subtasks to VCR:

  • Question Answering (Q->A): In this setup, a model is provided a question, and has to pick the best answer out of four choices. Only one of the four is correct.
  • Answer Justification (QA->R): In this setup, a model is provided a question, along with the correct answer, and it has to justify it by picking the best rationale out of four choices.

We combine the two parts with the Q->AR metric, in which a model only gets a question right if it answers correctly and picks the right rationale. Models are evaluated in terms of accuracy (%). How well will your model do?
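The combined metric is straightforward to compute from per-example predictions; a minimal sketch (variable names are illustrative):

```python
def q_ar_accuracy(preds, labels):
    """Q->AR accuracy in percent. An example counts as correct only if
    BOTH the chosen answer and the chosen rationale match the gold labels.

    preds and labels are lists of (answer_idx, rationale_idx) pairs.
    """
    correct = sum(
        1 for (pa, pr), (ga, gr) in zip(preds, labels)
        if pa == ga and pr == gr
    )
    return 100.0 * correct / len(labels)

# With an independent 1-in-4 chance on each subtask, random Q->AR
# accuracy is 1/16 = 6.25%, which rounds to the 6.2% random baseline.
```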

Rank  Model                                       Organization                                     Date            Q->A  QA->R  Q->AR
 -    Human Performance (Zellers et al. '18)      University of Washington                         -               91.0  93.0   85.0
 -    -                                           -                                                April 20, 2019  -     -      -
 -    -                                           Facebook AI Research                             Feb 19, 2019    -     -      -
 -    -                                           Peking University                                Feb 25, 2019    -     -      -
 -    Recognition to Cognition Networks           University of Washington                         Nov 28, 2018    -     -      -
 -    GS Reasoning                                UC San Diego                                     March 27, 2019  -     -      -
 -    -                                           Google AI Language (experiment by Rowan)         Nov 28, 2018    -     -      -
 -    -                                           Seoul National University (experiment by Rowan)  Nov 28, 2018    -     -      -
 -    Finetuned BERT-Large with Fixed ResNet 152  Google AI Language                               March 29, 2019  -     -      -
 -    Random Performance                          -                                                -               25.0  25.0   6.2